DOMAIN 1
Cloud Concepts, Architecture, and Design

FOUNDATIONAL TO THE UNDERSTANDING and use of the cloud and cloud computing is the information found in Domain 1. This information is fundamental for all other topics in cloud computing. A set of common definitions, architectural standards, and design patterns will put everyone on the same level when discussing these ideas and using the cloud effectively and efficiently.

UNDERSTAND CLOUD COMPUTING CONCEPTS

The first task is to define common concepts. In the following sections, we will provide common definitions for cloud computing terms and will discuss the various participants in the cloud computing ecosystem. We will also discuss the characteristics of cloud computing, answering the question “What is cloud computing?” We will also examine the technologies that make cloud computing possible.

Cloud Computing Definitions

The basic concepts of cloud computing, service models, and deployment models form the foundation of cloud computing practice. It is essential to understand each of them.

Cloud Computing

In NIST SP 800-145, cloud computing is defined as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources … that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Cloud computing is more than distributed or parallel computing, even when done over a network (a local area network or the Internet). It is an approach that provides access to computing resources in a simple, self-service way. If an individual has to call the vendor and negotiate a contract for a fixed service, it is probably not cloud computing. A company may still negotiate rates and services in a cloud environment, but the provisioning of services must not require ongoing involvement by the vendor.

Cloud computing requires a network in order to provide broad access to infrastructure, development tools, and software solutions. It requires some form of self-service to allow users to reserve and access these resources at times and in ways that are convenient to the user.

The provisioning of resources needs to be automated so that human involvement is limited. Any user should be able to access their account and procure additional resources or reduce current resource levels by themselves.

An example is Dropbox, a cloud-based file storage system. An individual creates an account, chooses the level of service they want or need, and provides payment information, and then the service and storage are immediately available. A company might negotiate contract rates more favorable than are available to the average consumer. But, once the contract is in place, the employees access this resource in much the same way as an individual user of this service.

Service Models

There are three service models: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). These models determine the type of user the cloud service is designed for: end users, developers, or system administrators.

The different service models also dictate the level of control over software applications, operating systems, networking, and other components. The end user has the least control in the SaaS model, with only basic configuration controls available, if any. The end user has the most control in the IaaS model, where operating system selection and configuration, patching, and software tools and applications are under the control of the end user.

When the service is provided to a company, the distinction can be less clear. While a PaaS may be intended for use by developers, there may be some administration of the service by the company as well. In fact, the lines often blur when a corporation enters into a business relationship with a cloud provider but does much of the provisioning and administrative work in house.

For example, Office 365 can be considered a SaaS solution, and to the individual consumer there is little or no administrative overhead. But, if a company contracts for Office 365, they may in fact administer the system, overseeing account provisioning, system monitoring, and other tasks that would be the domain of developers and administrators.

Deployment Models

There are four deployment models: public, private, community, and hybrid clouds. These define who owns and controls the underlying infrastructure of a cloud service and who can access a specific cloud service.

A public cloud deployment makes resources available to anyone who chooses to create an account and purchase access to the service. A service like Dropbox is a SaaS deployment available to the public. Accounts with cloud service providers such as Amazon Web Services (AWS), Google, and IBM Cloud are also public deployments of services.

A private cloud deployment consists of a set of cloud resources dedicated to a single organization (business, nonprofit, etc.). The cloud may be located on premises in the organization’s data center or may be hosted in a single-tenant cloud environment provided by a CSP. The services (SaaS, PaaS, or IaaS) are available solely to that organization. A private cloud retains many of the advantages of a cloud, such as on-demand resources and minimal management effort, while the organization keeps control of the underlying infrastructure. This can provide the benefits of cloud computing for files and data that are too sensitive to put on a public cloud.

A community cloud is most similar to a public cloud. It is a cloud deployment for a related group of companies or individuals such as a consortium of universities or a group of local or state governments. The cloud may be implemented in one of the organizations, with services provided to all members. Or, it can be implemented in an infrastructure like AWS or Google. However, access to the cloud resources is available only to the members of the group.

A hybrid cloud is any combination of these. A company may have a private cloud that accesses public cloud resources for some of its functions. The hybrid cloud allows the organization of cloud resources in whatever way makes the most sense to the organization. Private individuals are not usually involved in a hybrid cloud. This is because few individuals have their own private cloud or belong to a community cloud as individuals.

These concepts will be discussed further in the “Cloud Deployment Models” section later in this chapter.

Cloud Computing Roles

There are a number of roles in cloud computing, and understanding each one provides a clearer picture of the cloud service models, deployment models, security responsibilities, and other aspects of cloud computing.

Cloud Service Customer

The cloud service customer (CSC) is the company or person purchasing the cloud service, or in the case of an internal customer, the employee using the cloud service. For example, a SaaS CSC would be any individual or organization that subscribes to a cloud-based email service. A PaaS CSC would be an individual or organization subscribing to a PaaS resource, such as a development platform. With an IaaS solution, the customer is typically a system administrator who needs infrastructure to support their enterprise. In a very real sense, the customer is the individual the particular service model was created to support.

Cloud Service Provider

The cloud service provider (CSP) is the company or other entity offering cloud services. A CSP may offer SaaS, PaaS, or IaaS services in any combination. For example, major CSPs such as AWS, Microsoft Azure, and Google Cloud offer both PaaS and IaaS services.

Depending on the service provided (SaaS, PaaS, or IaaS), the responsibilities of the CSP vary considerably. In all cases, security in the cloud is a shared responsibility between the CSP and the customer. This shared responsibility is a continuum, with the customer taking a larger security role in an IaaS service model and the CSP taking a larger role in the security in a SaaS service model. The responsibilities of a PaaS fall somewhere in between. But even when a CSP has most of the responsibility in a SaaS solution, the customer is ultimately responsible for the data and processes they put into the cloud.

The basic infrastructure is the responsibility of the CSP, including the overall security of the cloud environment and the infrastructure components provided. This would include responsibilities such as physical security of data centers. For example, AWS is always responsible for securing the AWS Cloud environment. The customer is responsible for the security of what they do in the cloud. The customer has ultimate responsibility for the security of their customer and other sensitive data and how they use the cloud and cloud components. The CSP may provide many security services, but the customer may choose not to use some or all of those services.

As the cloud environment becomes more complicated, with hybrid clouds and community clouds that federate across multiple cloud environments, the responsibility for security becomes ever more complex. As the customer owns their data and processes, they have a responsibility to review the security policies and procedures of the CSP, and the federated responsibilities that may exist between multiple CSPs and data centers.

Cloud Service Partner

A cloud service partner is a third party offering a variety of cloud-based services (infrastructure, storage and application services, and platform services) using the associated CSP. An AWS cloud service partner, for example, uses AWS to provide its services. The cloud service partner can provide customized interfaces, load balancing, and a variety of other services. Partners can offer an easier entrance into cloud computing, as one of the customer's existing vendors may already be a cloud service partner. The partner has experience with the underlying CSP and can introduce a customer to the cloud more easily.

The cloud partner network is also a way to extend the reach of a CSP. The cloud service partner will brand its association with the CSP. Some partners align with multiple CSPs, giving the customer a great deal of flexibility.

Some partners provide their own or most of their own infrastructure and extend the service areas they can reach through the use of partnerships. For example, Dropbox extends its reach to service areas where it does not have infrastructure through a continued partnership with AWS. This also allows Dropbox to expand beyond what its own infrastructure will currently handle.

Cloud Service Broker

A cloud service broker is similar to a broker in any industry. Companies use a broker to find solutions to their cloud computing needs. The broker will package services in a manner that benefits the customer. This may involve the services of multiple CSPs. A broker is a value-add service and can be an easy way for a company to begin a move into the cloud. A broker adds value through aggregation of services from multiple parties, integration of services with a company's existing infrastructure, and customization of services that a CSP cannot or will not make.

Whichever of the many cloud service brokers (CSBs) is considered, it is important to thoroughly vet the CSB as you would any new vendor. Each serves a specific market and utilizes different cloud technologies, so the CSB selected must be a good fit for the customer organization and its cloud strategy.

Key Cloud Computing Characteristics

The NIST definition of cloud computing describes certain characteristics that clouds share. Not every third-party solution is a cloud solution. Understanding the key characteristics of cloud computing will allow you to distinguish between cloud solutions and noncloud solutions. This is important as these characteristics result in certain security challenges that may not be shared by noncloud solutions.

On-Demand Self-Service

The NIST definition of cloud computing identifies an on-demand service as one “that can be rapidly provisioned and released with minimal management effort or service provider interaction.” This means the user must be able to provision these services simply and easily when they are needed. If you need a Dropbox account, you simply set up an account and pay for the amount of storage you want, and you have that storage capacity nearly immediately. If you already have an account, you can expand the space you need by simply paying for more space. The access to storage space is on demand. Neither creating an account nor expanding the amount of storage available requires the involvement of people other than the customer. This access is automated and provided via a dashboard or other simple interface.
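
To make the idea concrete, the following minimal Python sketch provisions and releases storage through a provider's public API (here AWS S3 via the boto3 SDK); the bucket name and region are hypothetical, and the point is simply that no interaction with the provider's staff is required.

    # A minimal sketch of on-demand self-service using the AWS SDK for Python (boto3).
    # The bucket name and region are hypothetical; any globally unique name would do.
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    # Provision new storage capacity with a single API call -- no human
    # interaction with the provider is required.
    s3.create_bucket(Bucket="example-self-service-bucket-0001")

    # Releasing the resource is just as simple.
    s3.delete_bucket(Bucket="example-self-service-bucket-0001")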

This can facilitate the poor practice often labeled as shadow IT. The ease with which a service can be provisioned makes it easy for an individual, team, or department to bypass company policies and procedures that handle the provisioning and control of IT services. A team that wants to collaborate may choose OneDrive, Dropbox, SharePoint, or another service to facilitate collaboration. This can lead to sensitive data being stored in locations that do not adhere to required corporate controls and places the data in locations the larger business is unaware of and cannot adequately protect.

The pricing of these services may fall below corporate spending limits that would otherwise trigger involvement of the vendor management office (VMO) and information security and may simply be placed on a purchase card rather than through an invoice and vendor contract. Without VMO involvement, the corporate master services agreement will not be in effect.

If this behavior is allowed to proliferate, the organization can lose control of its sensitive data and processes. For example, the actuary department at an insurance company may decide to create a file-sharing account on one of several available services. As information security was not involved, company policies, procedures, risk management, and controls programs are not followed. As this is not monitored by the security operations center (SOC), a data breach may go unnoticed, and the data that gives the company a competitive advantage could be stolen, altered, or deleted.

Broad Network Access

Cloud services assume the presence of a network. For public and community clouds, this is the Internet. For a private cloud, it could be the corporate network—generally an IP-based network. In either case, cloud services are not local solutions stored on your individual computer. They are solutions that exist on a network—in the cloud. Without broad and ubiquitous network access, the cloud becomes inaccessible and is no longer useful.

Not all protocols and services on IP-based networks are secure. Part of the strategy for implementing a secure cloud solution is to choose secure protocols and services. For example, Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP) should not be used to move data to and from cloud services, as they pass the data in the clear. HTTP Secure (HTTPS), Secure FTP (SFTP), and other encrypted transport protocols should be used so that data in motion, even if intercepted, cannot be read.
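
As a brief illustration of choosing encrypted transport, this Python sketch uses the common requests library; the URL is a hypothetical placeholder.

    # Contrasting cleartext and encrypted transport in Python.
    import requests

    # Avoid: HTTP sends the request and response in the clear.
    # requests.get("http://files.example.com/report.csv")

    # Prefer: HTTPS encrypts data in motion; certificate verification is on by default.
    response = requests.get("https://files.example.com/report.csv", timeout=10)
    response.raise_for_status()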

If you are able to access the cloud service and obtain access to your data anywhere in the world, so can others. The requirement for identification and authentication becomes more important in this public-facing environment. The security of accessing your cloud services over the Internet can be improved in a number of ways, including stronger passwords, multifactor authentication (MFA), and virtual private networks (VPNs). The increased exposure of a system available over the network, where security is shared between the CSP and the customer, makes these additional steps more important.

Multitenancy

One way to get the improved efficiencies of cloud computing is through the sharing of infrastructure. A server may have more than one company purchasing access to its resources. These resources are shared by the tenants. Like an apartment building, these tenants share resources and services but have their own dedicated space. Virtualization allows the appearance of single tenancy in a multitenancy situation. Each tenant's data remains private and secure in the same way that your belongings (data) in an apartment building remain secure and isolated from the belongings (data) of your neighbor.

However, as the building is shared, it is still the responsibility of each tenant to exercise care to maintain the integrity and confidentiality of their own data. If the door is left unsecured, a neighbor could easily enter and take your things. It is also necessary to consider the availability of the data as the actions of another tenant could make your data inaccessible for a time due to no fault of your own. In our example, if another tenant is involved in illegal activity, the entire building could be shut down. Or, if another tenant damaged the building, your access might be reduced or eliminated. A multitenancy environment increases the importance of disaster recovery (DR) and business continuity (BC) planning.

Rapid Elasticity and Scalability

In a traditional computing model, a company would need to buy the infrastructure needed for any future, potential, or anticipated growth. If they estimate poorly, they either will have a lot of excess capacity or will run out of room. Neither situation is optimal. In a cloud solution, the space needed grows and shrinks as necessary to support the customer. If there is a peak in usage or resource needs, the service grows with the needs. When the needs are gone, the resources used decrease. This supports a pay-as-you-go model, where a customer pays only for the resources needed and used.
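
A rough back-of-the-envelope comparison shows why this matters; all rates and counts below are hypothetical.

    # A rough cost comparison, with entirely hypothetical rates, showing why
    # elasticity supports a pay-as-you-go model.
    HOURS_PER_MONTH = 730

    baseline_servers = 10        # steady-state need
    peak_servers = 40            # needed only during a short peak
    peak_hours = 72              # length of the peak
    rate_per_server_hour = 0.10  # hypothetical on-demand rate

    # Traditional model: provision enough capacity for the peak all month.
    fixed_cost = peak_servers * HOURS_PER_MONTH * rate_per_server_hour

    # Elastic model: pay for the baseline, and add capacity only during the peak.
    elastic_cost = (baseline_servers * HOURS_PER_MONTH
                    + (peak_servers - baseline_servers) * peak_hours) * rate_per_server_hour

    print(f"fixed: ${fixed_cost:,.2f}  elastic: ${elastic_cost:,.2f}")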

For the CSP, this presents a challenge. The CSP must have the excess capacity to serve all their customers without having to incur the cost of the total possible resource usage. They must, in effect, estimate how much excess capacity they must have to serve all of their customers. If they estimate poorly, the customer will suffer and the CSP's customer base could decrease.

However, there is a cost to maintaining this excess capacity. The cost must be built into the cost model. In this way, all customers share in the cost of the CSP maintaining some level of excess capacity. In the banking world, a bank must keep cash reserves of a certain percentage so that it can meet the withdrawal needs of its customers. But if every customer wanted all of their money at the same time, the bank would run out of cash on hand. In the same way, if every customer's potential peak usage occurred at the same time, the CSP would run out of resources, and the customers would be constrained (and unhappy).

The customer must also take care in setting internal limits on resource use. The ease of expanding resource use can make it easy to consume more resources than are truly necessary. Rather than cleaning up and returning resources no longer needed, it is easy to just spin up more resources. If care is not taken to set limits, a customer can find themselves with a large and unnecessary bill for resources “used.”

Resource Pooling

In many ways, this is the core of cloud computing. Multiple customers share a set of resources including servers, storage, application services, etc. They do not each have to buy the infrastructure necessary to provide their IT needs. Instead, they share these resources with each other through the orchestration of the CSP. Everyone pays for what they need and use. The goal is that resources are used efficiently by the group of customers.

This resource pooling presents some challenges for the cybersecurity professional. When resources are pooled, it can lead to multitenancy. A competitor or a rival can be sharing the same physical hardware. If the system, especially the hypervisor, is compromised, sensitive data could be exposed.

Resource pooling also implies that resources are allocated and deallocated as needed. The inability to ensure data erasure can mean that remnants of sensitive files could exist on storage allocated to another user. This increases the importance of data encryption and key management.

Measured Service

Metering service usage allows a CSP to charge for the resources used. In a private cloud, this can allow an organization to charge each department based on their usage of the cloud. For a public cloud, it allows each customer to pay for the resources used or consumed. With a measured service, everyone pays their share of the costs.
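
A simplified chargeback sketch illustrates the idea of a measured service: usage is metered per department and costs are allocated from published rates. The rates and records are hypothetical.

    # A simplified chargeback sketch for a measured service.
    from collections import defaultdict

    RATES = {"compute_hours": 0.08, "storage_gb_month": 0.02}  # hypothetical rates

    usage_records = [
        {"department": "actuarial", "metric": "compute_hours", "quantity": 1200},
        {"department": "actuarial", "metric": "storage_gb_month", "quantity": 500},
        {"department": "claims", "metric": "compute_hours", "quantity": 300},
    ]

    # Allocate cost to each department based on its metered usage.
    charges = defaultdict(float)
    for record in usage_records:
        charges[record["department"]] += record["quantity"] * RATES[record["metric"]]

    for department, amount in charges.items():
        print(f"{department}: ${amount:,.2f}")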

The cloud is especially advantageous for organizations with peaks in their resource needs or cycles of usage. For example, a tax preparer uses more resources in the United States in the beginning of the year, peaking on April 15. Many industries have sales dates: Memorial Day, President's Day, Black Friday, Cyber Monday, Arbor Day, etc. Okay, maybe not Arbor Day. Resource needs peak at these times. A company can pay for the metered service for these peak times rather than maintaining the maximum resource level throughout the year. Maintaining the maximum resources in-house would be expensive and a waste of resources.

Building Block Technologies

These technologies are the elements that make cloud computing possible. Without virtualization, there would be no resource pooling. Advances in networking allow for ubiquitous access. Improvements in storage and databases allow remote virtual storage in a shared resource pool. Orchestration puts all the pieces together. The combination of these technologies allows better resource utilization and improves the cost structure of technology. The same resources can be provided on-premises using these technologies, but with lower resource utilization and, in many situations, at a higher cost. Where costs are not decreased by cloud computing, a case for on-premises resources can be made.

Virtualization

Virtualization allows the sharing of servers. Virtualization is not unique to cloud computing and can be used to share corporate resources among multiple processes and services. For example, a server can have VMware installed and run a mail server on one virtual machine (VM) and a web server on another VM, both using the same physical hardware. This is resource sharing.

Cloud computing takes this idea and expands it beyond what most companies are capable of doing. The CSP shares resources among a large number of services and customers (also called tenants). Each tenant has full use of their environment without knowledge of the other tenants. This increases the efficient use of the resources significantly.

In addition, a CSP may have multiple locations. This allows services and data to move seamlessly between locations, improving resource use by the CSP. Services and data can easily be in multiple locations, improving business continuity and fault tolerance. The CSP can use the ease with which virtualization allows the movement of data and services to take advantage of available space and excess capacity, wherever it may be located.

This can create security and compliance concerns when data cannot move freely across borders or when jurisdictional issues exist. These issues are best handled during contract negotiation. Another concern is compromise of the hypervisor, as it controls all VMs on a machine. If the hypervisor is compromised, all data can be compromised. The security of the hypervisor is the responsibility of the CSP.

Storage

A variety of storage solutions allow cloud computing to work. Two of these are storage area networks (SANs) and network-attached storage (NAS). These and other advances in storage allow a CSP to offer flexible and scalable storage capabilities.

A SAN provides secure storage among multiple computers within a specific customer's domain. A SAN appears like a single disk to the customer, while the storage is spread across multiple locations. This is one type of shared storage that works across a network.

Another type of networked storage is the NAS. This network storage solution uses TCP/IP and allows file-level access. A NAS appears to the customer as a single file system. This is a solution that works well in a cloud computing environment.

The responsibility for choosing the storage technology lies with the CSP and will change over time as new technologies are introduced. These changes should be transparent to the customer. The CSP is responsible for the security of the shared storage resource.

Shared storage can create security challenges if file fragments remain on a disk after it has been deallocated from one customer and allocated to another. A customer has no way to securely wipe the drives in use, as the customer does not control the physical hardware. However, the use of crypto-shredding can make these fragments unusable if recovered.
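
A minimal sketch of the crypto-shredding idea, using the Python cryptography package: data written to shared storage is encrypted under a customer-held key, and destroying the key renders any leftover fragments unreadable.

    # Crypto-shredding sketch using the "cryptography" package.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # customer-managed key
    ciphertext = Fernet(key).encrypt(b"sensitive customer record")

    # ... the ciphertext is what actually lands on the CSP's shared storage ...

    key = None                           # "shred" the key (in practice, destroy it in a KMS/HSM)
    # Without the key, any fragment of the ciphertext left on reallocated
    # storage is unusable even if recovered.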

Networking

As all resources in a cloud environment are accessed through the network, a robust, available network is an essential element. The Internet is the network used by public and community clouds, as well as many private clouds. This network has proven to be widely available with broad capabilities. The Internet has become ubiquitous in society, allowing for the expansion of cloud-based services.

An IP-based network is only part of what is needed for cloud computing. Low latency, high bandwidth, and relatively error-free transmissions make cloud computing possible. The use of public networks also creates some security concerns. If access to cloud resources is via a public network, like the Internet, the traffic can be intercepted. If transmitted in the clear, the data can be read. The use of encryption and secure transport keeps the data in motion secure and cloud computing safer.

Databases

Databases allow for the organization of customer data. By using a database in a cloud environment, the administration of the underlying database becomes the responsibility of the CSP. They become responsible for patching, tuning, and other database administrator services. The exception is IaaS, where the user is responsible for whatever database they install.
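
For example, a customer of a managed database service typically just connects and runs queries while the CSP handles the underlying engine. A minimal Python sketch using psycopg2 is shown below; the endpoint and credentials are hypothetical.

    # Connecting to a managed (DBaaS-style) database: the customer runs queries,
    # while patching and tuning of the engine are the CSP's concern.
    import psycopg2

    conn = psycopg2.connect(
        host="mydb.example-csp.com",   # endpoint provided by the CSP (hypothetical)
        dbname="orders",
        user="app_user",
        password="***",
        sslmode="require",             # encrypt data in motion to the managed service
    )
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM invoices;")
        print(cur.fetchone())
    conn.close()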

The other advantage of databases offered through a cloud service is the number of different database types and options that can be used together. While traditional relational databases are available, so are other types. By combining traditional databases and other data storage tools with large amounts of data resources, data warehouses, data lakes, and other data storage strategies can be implemented.

Orchestration

Cloud orchestration is the use of technology to manage the cloud infrastructure. In a modern organization, there is a great deal of complexity, sometimes called the multicloud. An organization may contract through the VMO with multiple SaaS services. It may also have accounts with multiple CSPs, such as AWS, IBM Cloud Foundry, and Microsoft Azure, and may be using public, private, and community clouds.

This complexity could lead to data being out of sync, processes being broken, and the workforce unable to keep track of all the parts. Like the conductor of an orchestra, cloud orchestration keeps all of these pieces working together, including data, processes, and application services. Orchestration is the glue that ties all of the pieces together through programming and automation. Orchestration is valuable whether an organization runs a single cloud environment or a multicloud environment.

This is more than simply automating a task here and a task there. However, automation is used by the cloud orchestration service to create one seemingly seamless organizational cloud environment. In addition to hiding much of the complexity of an organization's cloud environment, cloud orchestration can reduce costs, improve efficiency, and support the overall workforce.

The major CSPs provide orchestration tools. These include IBM Cloud Orchestrator, Microsoft Operations Management Suite (OMS), Oracle Cloud Management Solutions, and AWS CloudFormation. Like all such offerings, they vary considerably in the tools provided and in their integration with other vendors' cloud offerings.
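
As a small illustration of the declarative style these tools use, the following sketch submits a minimal template to AWS CloudFormation via the boto3 SDK; the stack name and resource are hypothetical.

    # Orchestration sketch: declare a resource in a template and let the
    # orchestration service (AWS CloudFormation, via boto3) create and track it.
    import boto3

    template = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      ExampleBucket:
        Type: AWS::S3::Bucket
    """

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(StackName="example-orchestration-stack",
                                TemplateBody=template)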

DESCRIBE CLOUD REFERENCE ARCHITECTURE

The purpose of a reference architecture (RA) is to allow a wide variety of cloud vendors and services to be interoperable. An RA creates a framework or mapping of cloud computing activities and cloud capabilities so that the services of different vendors can be mapped to it and potentially work together more seamlessly. An example of this approach is the seven-layer Open Systems Interconnection (OSI) model of networking, which is used to discuss many networking protocols. As companies engage in a wide variety of cloud solutions from multiple vendors, interoperability is becoming more important, and the reference architecture helps make it happen more easily.

The National Institute of Standards and Technology (NIST) provides a cloud computing reference architecture in SP 500-292, as do other organizations. Some models, such as NIST's, are role based. Other RAs, such as the IBM conceptual reference model, are layer based. The NIST RA is intended to be vendor neutral and defines five roles: cloud consumer, cloud provider, cloud auditor, cloud broker, and cloud carrier.

Cloud Computing Activities

Cloud computing activities in an RA depend on whether the RA is role based or layer based. As an example, the role-based NIST RA will be used to describe cloud computing activities. A similar description could be made for a layer-based model. In a role-based RA, cloud computing activities are the activities of each of the roles. The NIST model includes five roles, with the following types of activities:

  • Cloud consumer: The procurement and use of cloud services. This involves reviewing available services, requesting services, setting up accounts and executing contracts, and using the service. What the activities consist of depends on the cloud service model. For a SaaS consumer, the activities are typical end-user activities such as email, social networks, and collaboration tools. The activities of a PaaS customer center on development, business intelligence, and application deployment. IaaS customers focus on activities such as business continuity and disaster recovery, storage, and compute.
  • Cloud provider: The entity that makes a service available. These activities include service deployment, orchestration, and management as well as security and privacy.
  • Cloud auditor: An entity capable of independent examination and evaluation of cloud service controls. These activities are especially important for entities with contractual or regulatory compliance obligations. Audits are usually focused on compliance, security, or privacy.
  • Cloud broker: This entity is involved in three primary activities: aggregation of services from one or several CSPs, integration with existing infrastructure (cloud and noncloud), and customization of services.
  • Cloud carrier: The entity that provides the network or telecommunication connectivity that permits the delivery and use of cloud services.

Cloud Service Capabilities

Capability types are another way to look at cloud service models. In this view, we look at the capabilities provided by each model. Our three service models are SaaS, PaaS, and IaaS. Each provides a different level and type of service to the customer. The shared security responsibilities differ for each type as well.

Application Capability Types

Application capabilities include the ability to access an application over the network from multiple devices and from multiple locations. Application access may be made through a web interface, through a thin client, or in some other manner. As the application and data are stored in the cloud, the same data is available to a user from whichever device they connect from. Depending on the end user, the look of the interface may be different.

Users do not have the capability to control or modify the underlying cloud infrastructure, although they may be able to customize their interface of the cloud solution. What the user gets is a positive experience when working on a laptop or phone. The organization does not have to be concerned with the different types of endpoints in use in their organization (as it relates to cloud service access). Supporting all of the different types of devices is the responsibility of the application service provider.

Platform Capability Types

A platform has the capability of developing and deploying solutions through the cloud. These solutions may be developed with available tools, they may be acquired solutions that are delivered through the cloud, or they may be solutions that are acquired and customized prior to delivery. The user of a platform service may modify the solutions they deploy, particularly the ones they develop and customize. However, the user has no capability to modify the underlying infrastructure.

What the user gets in a platform service are tools that are specifically tailored to the cloud environment. In addition, the user can experiment with a variety of platform tools, methods, and approaches to determine what is best for a particular organization or development environment without the expense of acquiring all of those tools and the underlying infrastructure. It provides a development sandbox at a lower cost than doing it all in house.

Infrastructure Capability Types

An infrastructure customer cannot control the underlying hardware but has control over the operating system, installed tools and solutions, and the provisioning of compute, storage, network, and other infrastructure resources.

This capability provides the customer with the ability to spin up an environment quickly. The environment may be needed for only hours or days. The parent organization does not have to purchase the hardware or physical space for this infrastructure or pay for its setup and continuing maintenance for usage spikes, temporary needs, or even regular cycles of use.

Cloud Service Categories

There are three primary cloud service categories: SaaS, PaaS, and IaaS. In addition, other service categories are sometimes suggested, such as storage as a service (STaaS), database as a service (DBaaS), and even everything as a service (XaaS). However, these can be described in terms of the three basic types and have not caught on in common usage. They are most often used in marketing.

Security of systems and data is a shared responsibility between the customer and service provider. The point at which responsibilities of the service provider end and the responsibilities of the customer begin depends on the service category.

When talking about SaaS, PaaS, or IaaS solutions, we must know which service model is being discussed; each is discussed in some detail next. Which model applies is determined in part by your role and where in the process you are.

If you are an end user, you are likely using a SaaS solution. If you are a developer, you may be offering a SaaS solution you developed in-house or through the use of a PaaS development environment. It is possible that the cloud service you provide is a development environment, so you offer a PaaS service you built on an IaaS service. Some customers work at all three levels. They use an IaaS service to build a development environment to create a SaaS solution. In each case, the security responsibilities are shared, as described elsewhere, by the customer and the CSP. However, that shared responsibility can become rather complex if the customer uses multiple services at differing service levels.

Software as a Service

SaaS is the most common cloud service that most people have experience with. This is where we find the end user, which at times is each of us. If you have shared a file through Google Docs, stored a file on Dropbox, signed a document using DocuSign, or created a document with Office 365, you have used a SaaS solution. They are usually subscription-based services and are easy to set up and use. Corporations often negotiate and purchase a site license. The amount of control over security will vary by the CSP and the size of the contract.

Platform as a Service

PaaS is the domain of developers. With a PaaS solution, the service provider is responsible for infrastructure, networking, virtualization, compute, storage, and operating systems. Everything built on top of that is the responsibility of the developer and their organization. Many PaaS service providers offer tools that may be used by the developers to create their own applications. How these tools are used and configured are the responsibility of the developers and their organizations.

With a PaaS solution, a developer can work from any location with an Internet connection. The developer's organization no longer has to provide the servers and other costly infrastructure needed. This can be especially useful when testing new solutions and developing experimental ideas. In addition, the CSP provides patching and updates for all services provided. Major CSPs offer PaaS solutions.

Infrastructure as a Service

IaaS is where we find the system administrators (SysAdmins). In a typical IaaS offering, the IaaS service provider is responsible for the provisioning of the hardware, networking, and storage, as well as any virtualization necessary to create the IaaS environment. The SysAdmin is responsible for everything built on top of that, including the operating system, developer tools, and end-user applications as needed.

The IaaS service may be created to handle resource surge needs, to create a development environment for a distributed DevOps team, or even to develop and offer SaaS products.

Cloud Deployment Models

There are three primary cloud deployment models plus a hybrid model, which is a combination of any two or more of the others. Each deployment model has advantages and disadvantages. A cloud deployment model tells you who owns the cloud and who can access the cloud—or at least, who controls access to the cloud. The deployment model may also tell you something about the size of the cloud.

Public Cloud

In a public cloud, anyone with access to the Internet may access the resources provided, usually through a subscription-based service. The resources and application services are provided by third-party service providers, and the systems and data reside on third-party servers. For example, Dropbox provides a file storage product to end users. The details of how Dropbox provides this service are for the business to determine. For the customer, it is simply a publicly available cloud service.

Privacy and security are common concerns with a public cloud. While those concerns may have been well founded in the past, public clouds have made great strides in both areas. The responsibility for both data privacy and security remains with the data owner (the customer). Concerns about reliability can sometimes be handled contractually through the use of a service-level agreement (SLA). However, for many public cloud services, the contractual terms are fixed for both individual and corporate accounts.

Concerns also exist for vendor lock-in and access to data if the service provider goes out of business or is breached. The biggest drawback may be in customization. A public cloud provides those services and tools it determines will be profitable, and the customer often must choose from among the options provided. Each cloud service provider has a varied set of tools.

Private Cloud

A private cloud is built in the same manner as a public cloud, architecturally. The difference is in ownership. A private cloud belongs to a single company and contains data and services for use by that company. There is not a subscription service for the general public. In this case, the infrastructure may be built internally or hosted on third-party servers.

A private cloud is usually more customizable, and the company controls access, security, and privacy. A private cloud is also generally more expensive. There are no other customers to share the infrastructure costs. With no other customers, the cost of providing excess capacity is not shared.

A private cloud may not save on infrastructure costs, but it provides cloud services to the company's employees in a more controlled and secure fashion. The major cloud vendors provide both a public cloud and the ability for an organization to build a private cloud environment.

The primary advantage to a private cloud is security. With more control over the environment and only one customer, it is easier to avoid the security issues of multitenancy. And when the cloud is internal to the organization, a secure wipe of hardware becomes a possibility.

Community Cloud

A community cloud falls somewhere between public and private clouds. The cloud is built for the needs of a group of related organizations, often in the same industry or sector. These communities might be banks; governments, such as a group of states; or resources shared among local, county (or parish), and state governments. Universities often set up consortiums for research, and this can be facilitated through a community cloud. Structured like public and private clouds, the infrastructure may be hosted by one of the community partners or by a third party. Access is restricted to members of the community and may be subscription based.

While a community cloud can facilitate data sharing among similar entities, each remains independent and is responsible for what it shares with others. As in any other model, the owner of the data remains responsible for its privacy and security, sharing only what is appropriate, when it is appropriate.

Hybrid Cloud

A hybrid cloud can be a combination of any of the other deployment models, but it is usually a combination of private and public clouds. It can be used in ways that enhance security where necessary while still allowing scalability and flexibility.

When an organization has highly sensitive information, the additional cost of a private cloud is warranted. The private cloud provides the access, resource pooling, and other benefits of a cloud deployment in a more secure fashion.

However, an organization will also have less sensitive information (e.g., email, memos, and reports). In most cases, the amount of this data is much larger. A public cloud can provide the benefits of cloud computing in a cost-effective manner for this less sensitive data. As most of an organization's data is usually of the less sensitive type, the savings realized from using a public cloud can be substantial, while the more sensitive data is protected in the private cloud. The overall cost savings remain, and the benefits of cloud computing are realized.

In a hybrid model, the disadvantages and benefits of each deployment model remain for the portion of the cloud using that model. Cloud orchestration can be used to keep this hybrid cloud manageable for the workforce to use.

Cloud Shared Considerations

All cloud customers and CSPs share a set of concerns or considerations. It is no longer the case that all companies use a single CSP or SaaS vendor. In fact, larger companies may use multiple vendors and two or more CSPs in their delivery of services. The business choice is to use the best service for a particular use (best being defined by the customer based on features, cost, or availability). The sections that follow discuss some major considerations that allow the use of multiple CSPs and vendors, in support of the complex cloud environment that exists.

Interoperability

With the concern over vendor lock-in, interoperability is a primary consideration. Interoperability creates the ability to communicate with and share data across multiple platforms and between traditional and cloud services provided by different vendors. Avoiding vendor lock-in allows the customer to make decisions based on the cost, feature set, or availability of a particular service regardless of the vendor providing the service. Interoperability leads to a richer set of alternatives and more choices in pricing.

Portability

Portability may refer to data portability or architecture portability. Data portability is focused on the ability to move data between traditional and cloud services, or between different cloud services, without resorting to difficult or lossy conversion methods, significant changes to either service, or the loss of metadata.

Data portability matters to an organization that uses a multicloud approach, as data moves between vendors. If each move becomes a major data porting exercise, the approach is neither seamless nor useful. Portability is also important in a cloud bursting scenario, where peak usage expands into a cloud environment and then shrinks back to its original noncloud size; this must be seamless to make the strategy useful. Data backups are increasingly made to the cloud, and a restore to in-house servers must be handled easily.

Architecture portability is concerned with the ability to access and run a cloud service from a wide variety of devices, running different operating systems. This allows users on a Windows laptop and a MacBook Pro to use the same application services, share the same data, and collaborate easily.

Reversibility

Reversibility is a measure of the extent to which your cloud services can be moved from one cloud environment to another. This includes moving between a cloud environment and an on-premises traditional environment. The movement between environments must be simple and automatic. Companies now move to and from the cloud and between clouds in a multicloud environment and when cloud bursting.

Movement between environments also needs to be secure, or it will be neither simple nor low cost. Reversibility decreases vendor lock-in, as solutions must be able to move between CSPs and to and from the cloud. It becomes increasingly important as application software and data come to reside in different locations; in a mature cloud environment, where they reside should not matter.

Availability

Availability has two components. The first is one leg of the CIA triad. Within the constraints of the agreed-upon SLA, the purchased services and the company's or individual's data must be made available to the customer by the CSP. If the SLA is not met, the contract will spell out the penalties or recourses available. For example, if a customer has paid for Dropbox but the service is not available when they try to access it, service availability has failed. If this failure falls outside the allowances of the SLA, the customer has a claim against the service provider.

The second component of availability is concerned with the elasticity and scalability of the cloud service. If the CSP has not properly planned for expansion, a customer may need to grow their use of the contracted service only to find that the resources are not available. Consider a service like Dropbox. If the customer pays for 2TB of storage and it is not available when they need it, the service fails in terms of availability, even if access to files already stored with the service continues to be provided.
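
Availability commitments are easier to reason about when translated into allowable downtime. A quick Python sketch (the SLA percentages are illustrative):

    # How much downtime different availability targets allow in a 30-day month.
    MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

    for sla in (0.999, 0.9995, 0.9999):
        allowed_downtime = MINUTES_PER_MONTH * (1 - sla)
        print(f"{sla:.2%} availability allows about "
              f"{allowed_downtime:.1f} minutes of downtime per month")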

Security

Cloud security is a challenging endeavor. It is true that the larger CSPs spend resources and focus on creating a secure environment. It is equally true that a large CSP is a large target, and there are aspects of cloud computing, such as multitenancy, that create new complexities to security.

One issue that is part of various national laws such as the European Union's General Data Protection Regulation is the restriction on cross-border transfers of data. In an environment where the actual hardware could be anywhere, it is an important consideration to know where your data resides. When there are law enforcement issues, location of the data may also be a jurisdictional challenge.

The owner of data remains ultimately responsible for the security of the data, regardless of what cloud or noncloud services are used. Cloud security involves more than protection of the data but includes the applications and infrastructure.

Privacy

The involvement of third-party providers, in an off-premises situation, creates challenges to data protection and privacy. The end user cannot always determine what controls are in place to protect the privacy of their data and must rely on privacy practice documents and other reports to determine if they trust the third party to protect their data privacy.

Privacy concerns include access to data both during a contract and at the end of a contract as well as the erasure or destruction of data when requested or as required within the contract. Regulatory and contractual requirements such as HIPAA and PCI are also key concerns. Monitoring and logging of data access and modification, and the location of data storage, are additional privacy concerns.

Resiliency

Resilience is the ability to continue operating under adverse or unexpected conditions. This involves both business continuity and disaster recovery planning and implementation. Business continuity might dictate that a customer stores their data in multiple regions so that a service interruption in one region does not prevent continued operations.

The cloud also provides resiliency when a customer suffers a severe incident such as weather, facilities damage, terrorism, or civil unrest. A cloud strategy allows the company to continue to operate during and after these incidents. The plan may require moving personnel, or contracting personnel at a new location, while the cloud strategy takes care of the data and processes, which remain available anywhere network connectivity exists.

Major CSPs use multiple regions and redundancy to improve the likelihood of a successful recovery. Many organizations plan a resilient strategy that combines internal resources with the capabilities of the cloud.

Performance

Performance is measured through an SLA. Performance of a cloud service is generally quite high as major CSPs build redundancy into their systems. The major performance concerns are network availability and bandwidth. A network is a hard requirement of a cloud service, and if the network is down, the service is unavailable. In addition, if you are in an area of limited bandwidth, performance will be impacted.

Governance

Cloud governance uses the same mechanisms as governance of your on-premises IT solutions. This includes policies, procedures, and controls. Controls include encryption, access control lists (ACLs), and identity and access management. As many organizations have cloud services from multiple vendors, a cloud governance framework and application can make the maintenance and automation of cloud governance manageable. This may be another cloud solution.

A variety of governance solutions, some cloud based, exist to support this need. Without governance, cloud solutions can easily grow beyond what can be easily managed. For example, a company may want to govern the number of CSP accounts, the number of server instances, the amount of storage utilized, the size of databases, and other storage tools. Each of these adds to the cost of cloud computing. A tool that tracks usage and associated costs will help an organization use the cloud efficiently and keep its use under budget.
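
As one hedged example of such tracking, the sketch below compares a month's spend, retrieved through the AWS Cost Explorer API via boto3, against an internal budget; the dates, budget figure, and account setup are hypothetical.

    # Compare actual cloud spend for a month against an internal budget.
    import boto3

    BUDGET = 25_000.00  # monthly budget in USD (hypothetical)

    ce = boto3.client("ce")
    result = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )

    spend = float(result["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])
    if spend > BUDGET:
        print(f"Over budget: ${spend:,.2f} spent against a ${BUDGET:,.2f} budget")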

Maintenance and Versioning

Maintenance and versioning in a cloud environment have some advantages and disadvantages. Each party is responsible for the maintenance and versioning of their portion of the cloud stack. In a SaaS solution, the maintenance and versioning of all parts is the responsibility of the CSP, from the hardware to the SaaS solution. In a PaaS solution, the customer is responsible for the maintenance and versioning of the applications they acquire and develop. The platform and tools provided by the platforms, as well as the underlying infrastructure, are the responsibility of the CSP. In an IaaS solution, the CSP is responsible for maintenance and versioning of hardware, network and storage, and the virtualization software. The remainder of the maintenance and versioning is the responsibility of the customer.

What this means in practical terms is that updates and patches in a SaaS or PaaS environment may occur without the knowledge of the customer. If properly tested before deployment, these changes will also go unnoticed by the customer. There remains the potential for something to break when an update or patch occurs, as it is impossible to test every possible variation that may exist in the customers' cloud environments. This is true in a traditional on-premises environment as well. In an IaaS environment, the customer has much more control over patch and update testing and deployment.

On the positive side, a SaaS deployment avoids the problem found in every organization of endpoints that never get updated and run older, insecure versions of potentially unlicensed software. When connecting to the cloud service, the customer will always be using the newest, most secure version of a SaaS solution.

In a PaaS or IaaS, the customer is responsible for some of the maintenance and versioning. However, each customer that connects to the PaaS and IaaS environment will be accessing the most current version provided. The maintenance and versioning are simplified by restricting the maintenance and versioning to the cloud environment. It is not necessary to update each endpoint running a particular piece of software. Everyone connecting to the cloud is running the same version, even if it is old and has not been updated.

Service Levels and Service Level Agreements

Contractually, an SLA specifies the required performance parameters of a solution. This negotiation will impact the price, as more stringent requirements can be more expensive. For example, support with a 24-hour response time will be less expensive than support with a 4-hour response time.

Some CSPs will provide a predefined set of SLAs, and customers choose the level of service they need. The customer can be an individual or an organization. For the customer contracting with a CSP, this is a straightforward approach. The CSP publishes their performance options and the price of each, and the customer selects the one that best suits their needs and resources.

In other cases, a customer specifies their requirements, and the CSP will provide the price. If the CSP cannot deliver services at the level specified or if the price is more than the customer is willing to pay, the negotiation continues. Once agreed upon, the SLA becomes part of the contract. This is generally true only for large customers. The cost of negotiating and customizing an SLA and the associated environment is not generally cost effective for smaller contracts and individuals.

Auditability

A cloud solution needs to be auditable. This is an independent examination of the cloud service's controls, with the expression of an opinion on their function with respect to their purpose. Are the controls properly implemented? Are the controls functioning and achieving their goal? These are the questions of an auditor.

A CSP will rarely allow a customer to perform an audit of its controls. Instead, independent third parties will perform assessments that are provided to the customer. Some assessments require a nondisclosure agreement (NDA), and others are publicly available. These include SOC reports, vulnerability scans, and penetration tests.

Regulatory

Proper oversight and auditing of a CSP makes regulatory compliance more manageable. A regulatory environment is one where a principle or rule controls or manages an organization. Governance of the regulatory environment is the implementation of policies, procedures, and controls that assist an organization in meeting regulatory requirements.

One form of regulation is governmental requirements that have the force of law. The Health Insurance Portability and Accountability Act (HIPAA), Gramm-Leach-Bliley Act (GLBA), and Sarbanes-Oxley Act (SOX) in the United States, and the GDPR in the European Union, are examples of laws implemented through regulations. If any of these apply to an organization, governance will put a framework in place to ensure compliance with these regulations.

Another form of regulation is put in place through contractual requirements. An SLA takes the form of a contractual obligation, as do the rules associated with credit and debit cards through the Payment Card Industry Data Security Standard (PCI DSS). Enforcement of contractual rules can be through the civil courts governing contracts. Governance must again put in place the framework to ensure compliance.

A third form of regulation comes from standards bodies such as the International Organization for Standardization (ISO) and NIST, as well as nongovernmental groups such as the Cloud Security Alliance and the Center for Internet Security. These organizations make recommendations and provide best practices in the governance of security and risk, supporting improved security and risk management. While this form of regulation does not usually have the force of law, an organization or industry may voluntarily choose to be governed by a specific set of guidelines; U.S. federal agencies, for example, are required to follow NIST requirements. If an organization or industry chooses to follow a set of guidelines under ISO, NIST, or another group, it must put the governance framework in place, and once that choice is made, the governance process ensures the organization complies with those guidelines.

Impact of Related Technologies

The technologies in this section may be termed transformative technologies. Without them, cloud computing still works and retains its benefits. These transformative technologies either improve an organization's capabilities in the cloud or expand the capabilities and benefits of cloud computing itself. The following sections describe the specific use cases for each technology.

Machine Learning

Machine learning (ML) is a key component of artificial intelligence (AI) and is becoming more widely used in the cloud. Machine learning creates the ability for a solution to learn and improve without the use of additional programming. Many of the CSPs provide ML tools. There is some concern and regulatory movement when ML makes decisions about individuals without the involvement of a person in the process.

The availability of large amounts of inexpensive data storage coupled with vast amounts of computing power increases the effectiveness of ML. A data warehouse, or even a data lake, can hold amounts of data that could not be easily approached before. ML tools can mine this data for answers to questions that could not be asked before because of the computing power required. This capability has the potential to transform how we use data and the answers we can extract from our data.

The security concern has to do with both the data and the processing. If all of your data is available in one large data lake, access to the data must be tightly controlled. If your data store is breached, all of your data is at risk. Controls to protect the data at rest and access to this data are crucial to make this capability safe for use.

The other concern is with how the data is used. More specifically, how will it impact the privacy of the individuals whose data is in the data store? Will questions be asked where the answers can be used to discriminate against groups of people with costly characteristics? Might insurance companies refuse to cover individuals when the health history of their entire family tree suggests they are an even greater risk than would be traditionally believed?

Governmental bodies and nongovernmental organizations (NGOs) are addressing these concerns to some degree. For example, Article 22 of the EU GDPR restricts automated decision-making, which often involves ML, when a decision that significantly affects an individual is made without human intervention. A decision on a mortgage loan, for example, could involve ML, but the final loan decision cannot be made by the ML solution alone; a human must review the information and make the final decision.

Artificial Intelligence

Machine learning is not the only AI technology. The goal of AI is to create a machine that has the capabilities of a human and cannot be distinguished from a human. AI could create intelligent agents online that are indistinguishable from human agents. This has the potential to impact the workforce, particularly in lower-skill areas. There is also concern about how such agents could be manipulated to affect consumer behavior and choices, and an unethical individual could use these tools to manipulate or harm people. Safeguards in the technology and legal protections will need to be in place to protect customers.

With the vast amount of data in the cloud, the use of AI raises security and privacy concerns beyond the data mining and decision-making of ML. The greater ability to aggregate and manipulate data through tools created by AI research heightens concerns over the security and privacy of that data and the uses that will be devised for it.

These concerns and trends will continue to be important over the next several years.

Blockchain

Blockchain shares some characteristics with cloud computing but has significant differences. A blockchain is an open, distributed ledger of transactions, often financial, between two parties. Each transaction is recorded in a permanent and verifiable manner. The records, or blocks, are linked cryptographically and are distributed across a set of computers owned by a variety of entities.
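
The cryptographic linking of records can be illustrated with a short sketch in Python. This is an illustration only, not a production blockchain: each block stores the hash of the block before it, so altering any earlier record invalidates every hash that follows.

  import hashlib
  import json
  import time

  def block_hash(block: dict) -> str:
      """Hash the block's contents (a stable JSON encoding) with SHA-256."""
      encoded = json.dumps(block, sort_keys=True).encode("utf-8")
      return hashlib.sha256(encoded).hexdigest()

  def add_block(chain: list, transaction: dict) -> None:
      """Append a block recording the transaction and linking to the previous block."""
      previous_hash = block_hash(chain[-1]) if chain else "0" * 64
      chain.append({
          "timestamp": time.time(),
          "transaction": transaction,
          "previous_hash": previous_hash,  # the cryptographic link
      })

  # Two parties record transactions; tampering with block 0 would break the link.
  ledger: list = []
  add_block(ledger, {"from": "alice", "to": "bob", "amount": 10})
  add_block(ledger, {"from": "bob", "to": "carol", "amount": 4})
  assert ledger[1]["previous_hash"] == block_hash(ledger[0])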

Blockchain provides a secure way to perform anonymous transactions that also maintain nonrepudiation. The ability to securely store a set of records across multiple servers, perhaps in different CSPs or on-premise, could lead to new and powerful storage approaches. Any data transaction would be committed to the chain and could be verifiable and secure. Blockchain technology pushes the boundaries of cryptographic research in ways that support secure distributed computing.

In cloud computing, the data may be owned by a single entity. But the ability to securely store this data across CSPs would open new storage methods and reduce vendor lock-in. Each data node could be in any location, on any server, within any CSP or on-premises; the specific location of each node in the chain would not matter. While not every record in the cloud is the result of a financial transaction, all data records are the result of some transaction.

Other improvements in the use of cryptography to link records in an immutable manner or improvements in the techniques used to distribute records across multiple servers would benefit both blockchain and cloud computing.

Internet of Things

With the growth of the Internet of Things (IoT), a great deal of data is being generated and stored. The cloud is a natural way to store this data. Particularly for large organizations, with IoT devices such as thermostats, cameras, irrigation controllers, and similar devices, the ability to store, aggregate, and mine this data in the cloud from any location with a network connection is beneficial.

The manufacturers of many IoT devices do not even consider the cybersecurity aspects of these devices. To an HVAC company, a smart thermostat may simply be a thermostat. These devices can be in service for many years and never have a firmware update. Patches and security updates are simply not installed, and these devices remain vulnerable.

The data on the device is not always the target. The device may become part of a botnet and be used in a distributed denial-of-service (DDoS) attack. Cameras and microphones can be used to surveil individuals. Processes controlled by IoT devices can be interrupted in ways that damage equipment (e.g., Stuxnet) or reputations.

Few organizations are sufficiently mature to really protect IoT devices. This makes these devices more dangerous because they are rarely monitored. The cloud provides the ability to monitor and control a large population of devices from a central location. For some devices, such as a thermostat, this may be a small and acceptable risk. However, audio and visual feeds raise privacy, security, and safety concerns that must be addressed.

Containers

Virtualization is a core technology in cloud computing. It allows resource pooling, multitenancy, and other important characteristics. Containers are one approach to virtualization. In a traditional virtualization environment, the hypervisor sits atop the host OS (or, for a type I hypervisor, directly atop the hardware). The VM sits atop the hypervisor and contains the guest OS and all files and applications needed in that VM. A physical machine can host multiple VMs, each running its own guest OS.

In containerization, there is no hypervisor and no guest OS. A container runtime sits above the host OS, and each container uses the container runtime to access needed system resources. The container holds the files and data necessary to run, but no guest OS. Because virtualization occurs higher in the stack, a container is generally smaller and can start up more quickly. It also uses fewer resources by not needing an additional OS in the virtual space. The smaller image size and low overhead are the primary advantages of containers over traditional virtualization.

Containers make a predictable environment for developers and can be deployed anywhere the container runtime is available. Similar to the Java Virtual Machine, a runtime is available for common operating systems and environments. Containers can be widely deployed. This improves portability by allowing the movement of containers from one CSP to another. Versioning and maintenance of the underlying infrastructure do not impact the containers as long as the container runtime is kept current.

The container itself is treated like a privileged user, which creates security concerns that must be carefully managed. Techniques and services, such as a cloud access security broker (CASB), exist to address these concerns. All major CSPs support some form of containerization.

Quantum Computing

Quantum computers use quantum-mechanical phenomena to perform computations that are impractical for classical computers. When these are linked to the cloud, the result is quantum cloud computing. IBM, AWS, and Azure all provide a quantum computing service to select customers. The increased power of quantum computers, delivered through the cloud, may make AI and ML more powerful and will allow modeling of complex systems on a scale never seen before. Quantum cloud computing has the potential to transform medical research, AI, and communication technologies.

A concern with quantum computing is that traditional methods of encryption and decryption could become obsolete, as the vast power of the cloud coupled with quantum computing makes the search space more manageable. This would effectively break current cryptographic methods. New methods of encryption, either quantum based or not susceptible to quantum attack, would be necessary.

UNDERSTAND SECURITY CONCEPTS RELEVANT TO CLOUD COMPUTING

Security concepts for cloud computing mirror the same concepts in on-premises security, with some differences. Most of these differences are related to the customer not having access to the physical hardware and storage media. These concepts and concerns will be discussed in the following sections.

Cryptography and Key Management

Cryptography is essential in the cloud to support security and privacy. With multitenancy and the inability to securely wipe the physical drive used in a CSP's data center, information security and data privacy are more challenging, and the primary solution is cryptography.

Data at rest and data in motion must be securely encrypted. A customer will need to be able to determine whether a VM or container has remained unaltered after deployment, which requires cryptographic tools. Secure communications are essential when moving data and processes between CSPs as well as to and from on-premises users. Again, cryptography is the solution.

One of the challenges with cryptography has always been key management. With many organizations using a multicloud strategy, key management becomes even more challenging. The questions to answer are

  • Where are the keys stored?
  • Who manages the keys (customer or CSP)?
  • Should a key management service be used?

In a multicloud environment, there are additional concerns:

  • How is key management automated?
  • How is key management audited and monitored?
  • How is key management policy enforced?

The power of a key management service (KMS) is that many of these questions are answered.

The KMS stores keys separately from the data. One benefit of encrypting data at rest is that many data breach laws provide an exemption if the data is encrypted securely. This benefit disappears if the encryption/decryption keys are stored with the data. So, if keys are to be stored in the cloud, they must be stored separately from the data. Outsourcing this has the benefit of bringing that expertise to the organization. However, like any outsourcing arrangement, you cannot turn it over to the KMS and forget about it. Someone still needs to oversee the KMS.
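
As a minimal sketch of keeping keys separate from the data, assuming a Python environment with the cryptography package installed, the following encrypts a record locally and sends only the ciphertext to cloud storage while the key is handed to a KMS. The kms_store and kms_fetch functions are hypothetical placeholders, not any vendor's actual API.

  from cryptography.fernet import Fernet  # pip install cryptography

  def kms_store(key_id: str, key: bytes) -> None:
      """Hypothetical placeholder: hand the key to the KMS, never to the data store."""
      ...

  def kms_fetch(key_id: str) -> bytes:
      """Hypothetical placeholder: retrieve the key from the KMS when authorized."""
      ...

  key = Fernet.generate_key()                # data encryption key
  kms_store("customer-records-key", key)     # key lives in the KMS, separate from the data

  ciphertext = Fernet(key).encrypt(b"customer record")
  # Only the ciphertext is written to cloud storage.
  # Later: Fernet(kms_fetch("customer-records-key")).decrypt(ciphertext)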

Using a KMS does not mean that you turn over the keys to another organization any more than using a cloud file repository gives away your data to the service storing your files. You choose the level of service provided by the KMS to fit your organization and needs.

The last three questions—automation, monitoring and auditing, and policy enforcement—are the questions to keep in mind when reviewing the different KMSs available. Like any other service, the features and prices vary, and each organization will have to choose the best service for its situation. A number of CSPs offer cryptographic KMSs, and these services help make key management in a multicloud environment scalable.

Access Control

There are three types of access control. These are physical access control, technical access control, and administrative access control. In a shared security model, the CSP and the customer have different responsibilities.

Physical access control refers to actual physical access to the servers and data centers where the data and processes of the cloud customer are stored. Physical access is entirely the responsibility of the CSP. The CSP owns the physical infrastructure and the facilities that house the infrastructure. Only they can provide physical security.

Administrative access control refers to the policies and procedures a company uses to regulate and monitor access. These policies include who can authorize access to a system, how system access is logged and monitored, and how frequently access is reviewed. The customer is responsible for determining policies and enforcing those policies as related to procedures for provisioning/deprovisioning user access and reviewing access approvals.

Technical access control is the primary area of shared responsibility. While the CSP is responsible for protecting the physical environment and the customer is responsible for the creation and enforcement of policies, both the customer and the CSP share responsibility for technical access controls.

For example, a CSP may be willing to federate with an organization's identity and access management (IAM) system. The CSP is then responsible for the integration of the IAM system, while the customer is responsible for the maintenance of the system. If a cloud IAM system is used (provided by the CSP or a third party), the customer is responsible for provisioning and deprovisioning users in the system and for determining access levels and system authorizations, while the CSP or third party maintains the IAM system.

Logging system access and reviewing the logs for unusual activity can also be a shared responsibility, with the CSP or third-party IAM provider logging access and the customer reviewing the logs or with the CSP providing both services. Either choice requires coordination between the customer and the CSP. Access attempts can come from a variety of devices and locations throughout the world, making IAM an essential function.

Data and Media Sanitization

Internally, it is possible to sanitize storage media as you have physical access to the media. You determine the manner of sanitization to include physical destruction of the storage media. You also determine the schedule for data deletion and media sanitization.

In the cloud this becomes more challenging. The data storage is shared and distributed, and access to the physical media is not provided. The CSP will not allow you access to the physical disks and will certainly not allow their destruction. In addition, data in the cloud is regularly moved and backed up. It may be impossible to determine if all copies of a data item have been deleted. This is a security and privacy concern. The customer will never have the level of control for data and media sanitization that they had when they had physical access and ownership of the storage hardware.

While some CSPs provide access to wipeable volumes, there is no guarantee that the wipe will be done to the level possible with physical access. Encrypted storage of data and crypto-shredding are discussed in the following sections. While not the same as physical access and secure wipe, they provide a reasonable level of security. If, after review, this level of security is not adequate for an organization's most sensitive data, this data should be retained on-premise in customer data centers or on storage media under the direct physical control of the customer.

Overwriting

Overwriting of deleted data occurs in cloud storage over time. Deleted data areas are marked for reuse, and eventually this area will be allocated to and used by the same or another customer, overwriting the data that is there. There is no specific timetable for overwriting, and the data or fragments may continue to exist for some time. Encryption is key in keeping your data secure and the information private. Encrypting all data stored in the cloud works only if the cryptographic keys are inaccessible or securely deleted.

Cryptographic Erase

Cryptographic erasure is an additional way to prevent the disclosure of data. In this process, the cryptographic keys are destroyed (crypto-shredding), eliminating the key necessary for decryption of the data. Like data and media sanitization and overwriting, encryption is an essential step in keeping your data private and secure. Secure deletion of cryptographic keys makes data retrieval nearly impossible.
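
A minimal sketch of crypto-shredding, assuming the data was encrypted with a customer-controlled key (here using the Python cryptography package): once every copy of the key is destroyed, the ciphertext that remains in cloud storage is effectively unreadable.

  from cryptography.fernet import Fernet, InvalidToken

  key = Fernet.generate_key()
  ciphertext = Fernet(key).encrypt(b"sensitive record")  # data as stored in the cloud

  # Crypto-shredding: securely destroy every copy of the key.
  key = None  # in practice, delete the key from the KMS/HSM and from all backups

  # Without the original key, decryption fails; here a different key is tried and rejected.
  try:
      Fernet(Fernet.generate_key()).decrypt(ciphertext)
  except InvalidToken:
      print("Data is unrecoverable without the original key")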

Network Security

Broad network access is a key component of cloud computing. However, if you have access to cloud resources over the network, bad actors can also have access. Bad actors threaten the security of the cloud service you are using and can threaten the privacy and security of your data.

There are a number of ways to provide network security. This list is not exhaustive, and the concepts are not mutually exclusive. Network security starts with controlling access to cloud resources through IAM, discussed previously. By controlling access to cloud resources, we limit their exposure. We may also limit their exposure to the public Internet through VPNs and cloud gateways. The use of VPNs for Internet security is common. Cloud gateways, ingress and egress monitoring, network security groups, and contextual-based security are discussed next. These are major topics within cloud network security, but new methods are regularly developed as vulnerabilities and threats constantly change.

Network Security Groups

Security remains an important concern in cloud computing. A network security group (NSG) is one way of protecting a group of cloud resources. The NSG provides a set of security rules or virtual firewall for those resources. The NSG can apply to an individual VM, a network interface card (NIC) for that VM, or even a subnet. The NSG is essentially a layer around the VM, subnet, or other cloud resource, as part of a layered defense strategy. This gives the customer some additional control over security.
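
The behavior of an NSG can be illustrated with a short sketch. This is not any CSP's actual API; it simply shows ordered allow/deny rules evaluated against inbound traffic, with an implicit deny when nothing matches.

  import ipaddress
  from dataclasses import dataclass

  @dataclass
  class Rule:
      priority: int        # lower number = evaluated first
      protocol: str        # "tcp", "udp", or "*"
      port: int | None     # None matches any port
      source_cidr: str     # e.g., "10.0.0.0/8" or "0.0.0.0/0"
      allow: bool

  def evaluate(rules, protocol: str, port: int, source_ip: str) -> bool:
      """Apply rules in priority order; implicit deny if nothing matches."""
      src = ipaddress.ip_address(source_ip)
      for rule in sorted(rules, key=lambda r: r.priority):
          if (rule.protocol in ("*", protocol)
                  and rule.port in (None, port)
                  and src in ipaddress.ip_network(rule.source_cidr)):
              return rule.allow
      return False

  nsg = [
      Rule(100, "tcp", 443, "0.0.0.0/0", True),   # HTTPS from anywhere
      Rule(200, "tcp", 22, "10.0.0.0/8", True),   # SSH from the internal range only
      Rule(4096, "*", None, "0.0.0.0/0", False),  # catch-all deny
  ]

  print(evaluate(nsg, "tcp", 443, "203.0.113.7"))  # True
  print(evaluate(nsg, "tcp", 22, "203.0.113.7"))   # False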

Cloud Gateways

A cloud gateway provides a level of security by keeping communication between the customer and the CSP off the public Internet. For example, AWS regions can be connected so that traffic can be routed to any region while staying within the CSP environment.

Contextual-Based Security

Contextual-based security uses context to help secure the enterprise and, in the case of cloud computing, the cloud resources. Context includes attributes such as identity (determined through the IAM system), location, time of day, and endpoint type. This is more than the heuristics used to determine if unusual behavior is occurring. The context can determine the level of access and what resources may be accessed. For example, connecting from the corporate network, through a VPN, or from public WiFi may provide different levels of access. If a user attempts access with an endpoint device that is not registered to that user, access may be blocked entirely.
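
A sketch of how context might drive an access decision is shown below. The attribute names and rules are illustrative assumptions, not a specific product's policy language.

  def access_level(user_id: str, network: str, device_registered: bool, hour: int) -> str:
      """Return an access level based on connection context, not identity alone."""
      if not device_registered:
          return "deny"                  # unknown endpoint: block entirely
      if network == "corporate":
          return "full"
      if network == "vpn":
          return "full" if 6 <= hour <= 20 else "limited"  # off-hours VPN gets less
      if network == "public_wifi":
          return "limited"               # e.g., web apps only, no admin consoles
      return "deny"

  print(access_level("jsmith", "public_wifi", device_registered=True, hour=14))  # limited
  print(access_level("jsmith", "corporate", device_registered=False, hour=10))   # deny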

Ingress and Egress Monitoring

Cloud ingress and egress must be carefully monitored. Security is provided by limiting the number of ingress/egress points available to access resources and then monitoring them. This is similar to a castle with a single entrance. It is easier to control access and prevent access by bad actors when the way in and out is carefully defined and controlled.

Ingress controls can block all or some external access attempts from the public Internet. Inbound connections can be limited to those that are in response to a request initiated from within the cloud resource. This limits connections to the Internet to only those requests initiated in the cloud environment or wanted by the cloud environment.

Egress controls are a way to prevent internal resources from connecting to unapproved and potentially dangerous locations on the Internet. If a resource is infected, egress monitoring may prevent malware from contacting its command-and-control locations. Monitoring what data leaves the environment can also assist in data loss prevention, as illustrated in the sketch below.
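
As a simple illustration of egress control, the following sketch permits outbound connections only to an approved list of destinations and logs everything else. The domain names are made up.

  ALLOWED_DESTINATIONS = {"api.partner.example", "updates.vendor.example"}

  def egress_permitted(destination: str) -> bool:
      """Allow outbound traffic only to approved destinations; log everything else."""
      if destination in ALLOWED_DESTINATIONS:
          return True
      print(f"BLOCKED egress to {destination}")  # feed this into monitoring/DLP
      return False

  egress_permitted("api.partner.example")   # True
  egress_permitted("c2.malicious.example")  # False, and the attempt is logged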

Virtualization Security

Virtualization is an important technology in cloud computing. It allows for resource sharing and multitenancy. With these benefits come security concerns. Security of the virtualization method is crucial. The two primary methods of virtualization are VMs created and managed through a hypervisor and virtualization through containers.

Hypervisor Security

A hypervisor, such as Hyper-V or vSphere, packages resources into a VM. Creating and managing the VM are both done through the hypervisor. For this reason, it is important that the hypervisor be secure. Hypervisors such as Hyper-V, VMware ESXi, and Citrix XenServer are type I, or native, hypervisors that run directly on the host's hardware.

A type I hypervisor is faster and more secure but more difficult to set up than a type II hypervisor, such as VMware Workstation or VirtualBox, which sits on top of the host operating system and is easier to set up but less secure.

A hypervisor is a natural target for malicious users, as it controls all the resources used by each VM. If an attacker compromises another tenant on the same server and can then compromise the hypervisor, they may be able to attack other customers through it. Hypervisor vendors are continually working to make their products more secure.

For the customer, security is enhanced by controlling admin access to the virtualization solution, designing security into the virtualization solution, and securing the hypervisor. All access to the hypervisor should be logged and audited. The hypervisor's network access should be limited to only what is necessary, and this traffic should be logged and audited. Finally, the hypervisor must remain current, with all security patches and updates applied as soon as is reasonable. More detailed security recommendations are published in NIST SP 800-125A Rev. 1 and by hypervisor vendors.

Container Security

Containerization, such as through Docker or LXC, has many benefits and some vulnerabilities. Its benefits include resource efficiency, portability, easier scaling, and agile development. Containerization also improves security by isolating the cloud solution from the host system. Security risks occur through inadequate identity and access management and through misconfigured containers. Software bugs in the container software can also be an issue. The isolation of the container from the host system does not mean that security of the host system can be ignored.

The security issues of containerization must first be addressed through education and training. Traditional DevOps practices and methodologies do not always translate to secure containerization. The use of specialized container operating systems is also beneficial as it limits the capabilities of the underlying OS to those functions a container may need. Much like disabling network ports that are unused, limiting OS functionality decreases the attack surface. Finally, all management and security tools used must be designed for containers. A number of cloud-based security services are available.

There are many containerization solutions provided by major CSPs. One can easily find articles that extoll the virtues of one solution over another. As with other areas of technology, which is best is often a matter of who you ask. Determining which solution is best for your organization requires comparing costs and features.

Common Threats

Previous sections dealt with threats that are related to the specific technologies that are key parts of cloud computing, such as virtualization, media sanitization, and network security. However, all other threats that may attack traditional services are also of concern. Controls that are used to protect access to software solutions, data transfer and storage, and identity and access control in a traditional environment must be considered in a cloud environment as well.

UNDERSTAND DESIGN PRINCIPLES OF SECURE CLOUD COMPUTING

As processes and data move to the cloud, it is only right to consider the security implications of that business decision. Cloud computing is as secure as it is configured to be. With careful review of CSPs and cloud services, as well as fulfilling the customer's shared responsibilities for cloud security, the benefits of the cloud can be obtained securely. The following sections discuss methods and requirements that help the customer work securely in the cloud environment.

Cloud Secure Data Lifecycle

As with all development efforts, the best security is the security that is designed into a system. The cloud secure data lifecycle can be broken down into six steps or phases.

  • Create: This is the creation of new content or the modification of existing content.
  • Store: This generally happens at creation time. This involves storing the new content in some data repository, such as a database or file system.
  • Use: This includes all the typical data activities such as viewing, processing, and changing.
  • Share: This is the exchange of data between two entities or systems.
  • Archive: Data is no longer used but is being stored.
  • Destroy: Data has reached the end of its life, as defined in a data retention policy or similar guidance. It is permanently destroyed.

At each of these steps in the data's lifecycle, there is the possibility of a data breach or data leakage. The general tools for preventing these are encryption and the use of data loss prevention (DLP) tools.

Cloud-Based Disaster Recovery and Business Continuity Planning

A business continuity plan (BCP) is focused on keeping a business running following a disaster such as weather, civil unrest, terrorism, fire, etc. The BCP may focus on critical business processes necessary to keep the business going while disaster recovery takes place. A disaster recovery plan (DRP) is focused on returning to normal business operations. This can be a lengthy process. The two plans work together.

In a BCP, business operations must continue, but they often continue from an alternate location. So, the needs of BCP include space, personnel, technology, process, and data. The cloud can support the organization with many of those needs. A cloud solution provides the technology infrastructure, processes, and data to keep the business going.

Larger CSPs such as AWS, Azure, and Google divide their infrastructure into regions, and within a region latency is low. Availability zones within a region are independent data centers that protect the customer from the failure of a single data center. However, a major disaster could impact all the data centers in a region and eliminate all availability zones in that region. A customer can set up their plan to include redundancy across a single region using multiple availability zones, or redundancy across multiple regions, to provide the greatest possible availability of the necessary technology, processes, and data.

One drawback of multiregion plans is that the cost grows quickly. For this reason, many organizations only put their most critical data—the core systems that they cannot operate the business without—across two or more regions, but less critical processes and data may be stored in a single region. Functions and data that are on-premise may also utilize cloud backups. But they may not be up and running as quickly as the cloud-based solutions. The business keeps operating, although not all business processes may be enabled.

DRPs rely heavily on data backups. A DRP is about returning to normal operations. And returning the data to the on-premise environment is part of that. After the on-premise infrastructure has been rebuilt, reconfigured, or restored, the data must be returned.

One failure of many DRPs is the lack of an offsite backup or the ability to quickly access that data backup. In the cloud, a data backup exists in the locations (regions or availability zones) you specify and is available from anywhere network access is available. A cloud-based backup works only if you have network access and sufficient bandwidth to access that data. That must be part of the DRP, along with an offsite data backup. A physical, local backup can also be beneficial. Not every disaster destroys the workplace.

Cost-Benefit Analysis

Cloud computing is not always the correct solution. Whether it is the correct solution is a business decision guided by a cost-benefit analysis. Cloud computing benefits include reduced capital costs, as the individual customer no longer has to buy the hardware and system software that the CSP provides. The lowered capital expenses are offset by higher operating costs, as the customer must pay for the services used.

In many countries, capital expenses and operational expenses are treated very differently for tax purposes. For example, capital expenses may be written off or depreciated over a number of years; writing off the entire expense of new infrastructure purchased and installed on-premises could take many years. Operational expenses, such as the cost of cloud computing, can usually be written off as a business expense in the year the expense is incurred.
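
A simplified, hypothetical comparison of the two treatments is sketched below. The dollar amounts, depreciation period, and tax rate are made-up inputs; actual tax treatment varies by jurisdiction and should come from the organization's finance team.

  # Hypothetical figures for illustration only.
  capex = 500_000          # on-premises hardware purchased up front
  depreciation_years = 5   # straight-line depreciation
  annual_cloud_opex = 120_000
  tax_rate = 0.25

  capex_deduction_year1 = (capex / depreciation_years) * tax_rate
  opex_deduction_year1 = annual_cloud_opex * tax_rate

  print(f"Year-1 tax benefit, on-premises capex: ${capex_deduction_year1:,.0f}")  # $25,000
  print(f"Year-1 tax benefit, cloud opex:        ${opex_deduction_year1:,.0f}")   # $30,000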

The business must understand the cost and tax implications of moving to the cloud or investing in on-premises infrastructure to make the choice most beneficial to the business. A move to the cloud may be made as much (or more) for financial reasons as for technical ones, and it should be made only if the benefits justify the cost.

Functional Security Requirements

Functional security requirements can make the move to cloud computing, and the governance of cloud computing, safer for a customer's information and processes. However, there remain some challenges with cloud computing, including portability, interoperability, and vendor lock-in.

These challenges can be lessened through the use of a vendor management process to ensure standard capabilities, clear identification of the responsibilities of each party, and the development of SLAs as appropriate. For complex or expensive systems, the request for proposal (RFP) process can be used to clearly state customer requirements. Security requirements should be part of the requirements specified in the RFP and can be part of the process of choosing a vendor; a vendor that cannot meet the customer's security needs can be eliminated early on.

Portability

One-time movement occurs when a customer moves a solution to a cloud platform with no intention of moving it again. This is not common. In a modern environment, movement to and from the cloud, as well as between cloud services and CSPs, is much more common. These movements are not simply a forklift operation where you pick up an on-premises solution and its data and drop them into a cloud account. Each CSP uses different tools and templates, so a move from one CSP to another requires mapping from one environment to the other, with the associated data cleanup. Moving from your own infrastructure to a CSP presents the same challenge.

Frequent movement between CSPs and between a CSP and your own infrastructure is significantly more difficult, and data can be lost or modified in the process, violating availability and integrity rules. Portability means that the movement between environments is possible. Portable movement will move services and data seamlessly and may be automated.

The movement of data between software products is not a new issue. It can be complicated in the cloud by the need to continue paying for the old service while porting to the new one. This puts time pressure on the porting.

Interoperability

With customers using a variety of cloud services, often from different vendors, interoperability is an important consideration. In addition, some situations may require a private cloud sharing data with an on-premises solution. The ability to share data between tools and cloud environments and between clouds and corporate infrastructure is important. One issue is that the security tools and control sets differ between CSPs. A gap in security may result. Careful planning is essential, and the services of a cloud broker may also be warranted.

One way to improve the situation is through application programming interfaces (APIs). If properly designed, an API can bridge the security gap between services and allow the sharing of data and processes across multiple platforms. For example, if a SaaS tool is used to build a data inventory and supports the corporate data/system classification scheme, an API could be built to securely share that information with the governance, risk management, and compliance (GRC) or system inventory tool. This creates a single source for the data, shares it with relevant systems, and removes the potential for conflicting data classifications in different systems.
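
A sketch of such an API call appears below. The endpoint, token, and field names are hypothetical; the point is that the classification data has a single source and is pushed to the GRC tool over an authenticated, encrypted API.

  import json
  import urllib.request

  record = {
      "system": "payroll-db",
      "classification": "confidential",  # from the corporate classification scheme
      "owner": "hr-data-office",
  }

  # Hypothetical GRC endpoint and token; in practice both come from the GRC vendor.
  req = urllib.request.Request(
      "https://grc.example.internal/api/v1/assets",
      data=json.dumps(record).encode("utf-8"),
      headers={
          "Content-Type": "application/json",
          "Authorization": "Bearer <token>",
      },
      method="POST",
  )
  with urllib.request.urlopen(req) as resp:  # transport should be TLS end to end
      print(resp.status)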

Vendor Lock-in

Solving the interoperability and portability challenges will go a long way toward ending vendor lock-in. This occurs when a customer is tied to a specific CSP and moving would incur significant costs including financial, technical, and legal. Vendor lock-in remains a significant concern with cloud computing. Continued advances in virtualization, improvements in portability and interoperability, and a careful design within a reference architecture have decreased this issue.

An additional concern is the use of CSP-specific services. If these are used to build capabilities, moving to a new CSP also impacts this additional capability. This is similar to using nonstandard features of a compiler in the development process. It locks you into that development environment.

One example is the use of AWS CloudTrail. CloudTrail allows auditing of your AWS account in support of governance, risk management, and compliance. If the decision is made to move away from AWS, the GRC functionality will have to be rebuilt with new services, either with the new CSP or with another vendor.

With additional improvements and careful architecture, vendor lock-in should become a thing of the past. Until then, the security challenges of cloud computing in general, and portability and interoperability in particular, remain.

Security Considerations for Different Cloud Categories

In a cloud environment, security responsibilities are shared between the service provider and the customer. In the SaaS model, the customer has the least responsibility, and in the IaaS model, the customer has the most responsibility. In a PaaS, the responsibility is shared more equally.

The Shared Responsibility Model for cloud services is commonly presented by the major vendors, and the versions are all similar. There is an architecture stack: some items in the stack are the responsibility of the CSP, and some are the responsibility of the customer. In between is an area of varied responsibility; at times this middle area is the responsibility of the CSP, sometimes of the customer, and sometimes of both. It is important for the customer to know their responsibilities, especially in this middle region.

A typical architecture stack looks like this:

  • Data
  • APIs
  • Applications/solutions
  • Middleware
  • Operating systems
  • Virtualization (VMs, virtual local area networks)
  • Hypervisors
  • Compute and memory
  • Data storage
  • Networks
  • Physical facilities/data centers

It is generally understood that the CSP is responsible for the last five items on the list in all delivery models. However, where the line between customer and CSP exists varies beyond that.

The exact split and layer names vary by vendor, but the general principle remains the same. Both the CSP and the customer have some individual security responsibilities, and along the line where these meet, each may have some security responsibilities. The line for each delivery model is explained in the following sections.

Software as a Service

From a security standpoint, you have limited security options with a SaaS solution. Most of the security options are provided by the SaaS provider. The SaaS provider is responsible for the security of the infrastructure, operating system, application, networking, and storage of the information on their service.

In the Shared Responsibility Model, the customer is responsible for their data and may have some responsibility for the APIs. All other layers are the responsibility of the CSP.

The user of a SaaS solution has responsibilities as well. When a service is subscribed to by an organization or an individual, it is important to understand the security policies and procedures of the SaaS provider to the extent possible. In addition, the user determines how information is transferred to the SaaS provider and can do so securely through end-to-end encryption. The SaaS user is responsible for determining how the data is shared. Finally, the user can provide access security through proper use of login credentials, secure passwords, and multifactor authentication when available.

Platform as a Service

In a PaaS solution, security of the underlying infrastructure, including the servers, operating systems, virtualization, storage, and networking, remains the responsibility of the PaaS service provider. The developer is responsible for the security of any solutions developed and the data used by their application, as well as for the user responsibilities of a SaaS application regarding user access and use of the solutions developed.

In the Shared Responsibility Model, this means the customer is responsible for the data, APIs, and applications, with potentially some middleware responsibility.

Infrastructure as a Service

IaaS security leaves most of the responsibility of security with the customer. IaaS service providers secure the portions they are responsible for. These areas include the servers, virtualization, storage, and networking. The IaaS customer is responsible for the security of the operating system and everything built on top of it, including the responsibilities of a PaaS and a SaaS implementation.

In the Shared Responsibility Model, the customer is responsible for everything above the hypervisor. As in the other delivery models, the exact responsibility along this line can vary between the CSP and customer and must be clearly understood in each case.

EVALUATE CLOUD SERVICE PROVIDERS

Evaluation of CSPs is done through objective criteria. This becomes simpler if those criteria are a known standard. Standards are voluntary for some and required for others. However, the use of a standard makes comparisons between products and services more straightforward.

For example, FIPS 140-2, Federal Information Security Management Act (FISMA), and NIST standards are required for those working with the U.S. federal government. PCI DSS is contractually required for those accepting credit card payments.

Federal Information Processing Standards (FIPS), FISMA, and NIST may have been chosen as the standard in some industries but are suggestions and guidelines for everyone else. Internationally, Common Criteria and ISO standards have been chosen as required by some organizations, industries, and countries and serve as recommendations and guidelines for everyone else.

Verification against Criteria

Different organizations have published compliance criteria. For cloud computing, these are currently regulatory or voluntary standards. The International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) standards are voluntary but may be necessary to work in some parts of the world and may prove advantageous even when not required. PCI DSS is a contractual requirement: the Payment Card Industry (PCI) Security Standards Council publishes the criteria that are required if you are a vendor that wants to accept credit cards as payment.

International Organization for Standardization/International Electrotechnical Commission

ISO/IEC 27017 and 27018 provide guidance for the implementation of cloud security and the protection of personally identifiable information (PII). ISO/IEC 27017 added 35 supplemental controls and extended seven existing controls in the original ISO documents. Most CSPs were already compliant with these additional controls or could easily add them, making compliance with this standard straightforward.

ISO/IEC 27018 serves as a supplement to ISO 27002 and is specifically geared toward PII processors. Like 27017, these principles are recommendations and not requirements. 27018 added 14 supplementary controls and extended 25 other controls. As an international standard, adherence to this standard will help an organization address a wide and ever-changing data protection and privacy environment stretching from GDPR in the EU to standards in Russia, Brazil, the Philippines, and elsewhere around the globe.

While these are recommendations and not requirements, many international corporations strive to be ISO-compliant. In that case, the criteria provided by ISO/IEC become the governing principles of the organization, including the reference framework, cloud service models (of which there are seven instead of just SaaS, PaaS, and IaaS), and the implementation of controls from the approved control set. Auditing the controls and conducting a risk assessment should help identify which controls best address identified risk.

The ISO standard is important for companies in the international marketplace. These standards have wide acceptance throughout the world. These standards also provide an excellent framework for developing cloud services. Cloud services, because of their broad network access, are more international than many traditional IT services. An international standard is an important consideration.

Payment Card Industry Data Security Standard

The PCI Security Standards Council released version 3.2.1 of PCI DSS in 2018. PCI DSS compliance is a contractual obligation between the major credit card brands and the vendor. All cloud customers that accept credit cards must comply with all 12 requirements.

In the 12 requirements, the cloud is referenced in only one place and refers to the appendix for shared hosting requirements. These requirements can be summarized as follows:

  • Ensure that a customer's processes can only access their data environment.
  • Restrict customer access and privileges to their data environment.
  • Enable logging and audit trails that are unique to each environment, consistent with requirement 10.
  • Provide processes to support forensic investigations.

In addition to these requirements, the general auditability of the cloud environment would be beneficial in assuring compliance with PCI DSS 3.2.1.

System/Subsystem Product Certifications

The following are system/subsystem product certifications.

Common Criteria

Common Criteria (CC) is an international set of guidelines and specifications to evaluate information security products. There are two parts to CC:

  • Protection profile: Defines a standard set of security requirements for a specific product type, such as a network firewall. This creates a consistent set of standards for comparing like products.
  • Evaluation assurance level: Scored from level 1 to 7, with 7 being the highest. This measures the amount of testing conducted on a product. It should be noted that a level 7 product is not automatically more secure than a level 5 product. It has simply undergone more testing. The customer must still decide what level of testing is sufficient. One reason to not subject every product to level 7 is the cost involved.

The testing is performed by an independent lab from an approved list. Successful completion of this certification allows sale of the product to government agencies and may improve competitiveness outside the government market as CC becomes better known. The goal is for products to improve through testing. It also gives a customer a consistent basis for comparing competing security products.

FIPS 140-2

CC does not include a cryptographic implementation standard or test. CC is an international standard, and cryptographic standards are country specific. CC leaves cryptography to each country and organization.

For the U.S. federal government, the cryptographic standard is FIPS 140-2. Organizations wanting to do business with the U.S. government must meet the FIPS criteria. Organizations in regulated industries and nonfederal government organizations are increasingly looking to FIPS certification as their standard. As FIPS use increases, additional industries are expected to use FIPS as their cryptographic standard.

Cybersecurity companies are increasingly seeking FIPS certification to increase their market potential and maximize the value of their services.

FIPS requires that encryption (both symmetric and asymmetric), hashing, and message authentication use algorithms from an approved list, which is published with FIPS 140-2. For example, message authentication can use Triple-DES, AES, or HMAC. There are many more algorithms in use than are approved under FIPS.
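
For example, message authentication with HMAC, one of the approved functions, is available in Python's standard library, as sketched below. This illustrates the algorithm choice only; a FIPS-validated deployment also requires that the underlying cryptographic module itself be validated.

  import hashlib
  import hmac
  import secrets

  key = secrets.token_bytes(32)        # illustrative key; manage real keys in a KMS/HSM
  message = b"invoice #1047: $1,250.00"

  tag = hmac.new(key, message, hashlib.sha256).hexdigest()  # HMAC-SHA-256, a FIPS-approved MAC

  # The receiver recomputes the tag and compares it in constant time.
  received_tag = tag
  print(hmac.compare_digest(received_tag, hmac.new(key, message, hashlib.sha256).hexdigest()))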

Being considered FIPS-validated requires testing by one of a few specified labs through four levels of testing. Sometimes a product is referred to as FIPS-compliant, which is a much lower bar, indicating some components of the product have been tested, but perhaps not the entire product. It is important to read the fine print. Validated and compliant are not the same thing. A CCSP should also become familiar with the new FIPS 140-3, which will be replacing FIPS 140-2 over the next several years.

Summary

In order to discuss the cloud, each individual must be familiar with the terminology surrounding this technology. This understanding includes characteristics of cloud computing, as well as the service models and deployment models of cloud computing. It also includes the role of the CSP in cloud computing and the shared security model that exists between the CSP and the customer. Finally, the technologies that make cloud computing possible are discussed in this chapter alongside the emerging technologies that will support and transform cloud computing in the future. Understanding this chapter will make it easier to access the discussion in each of the following domains.
