Chapter 9
Secure Cloud and Virtualization

This chapter discusses securing virtualized, distributed, and shared computing. Virtualized computing has come a long way in the last 20 years, and it can be found everywhere today, from major businesses to small office, home office (SOHO) computing environments. Advances in computing have brought about more changes than just virtualization, including network storage and cloud computing. Cloud computing changed the concept of traditional network boundaries by placing assets outside the organization's perimeter.

In this chapter, we'll look at both the advantages and the disadvantages of virtualization and cloud computing as well as the concerns that they raise for enterprise security.

Implement Secure Cloud and Virtualization Solutions

A question that increasingly concerns security professionals is who has the data. With the rise of cloud computing, network boundaries are much harder to define. A network boundary is the point at which your control ends, and cloud computing does away with the typical network boundary. The elasticity and scalability of cloud services blur this line, and the impact is significant because historically the demarcation point sat at the edge of the physical network, where the firewall is typically found.

The concept of cloud computing represents a shift in thought in that end users do not know the details of a specific technology. The service can be fully managed by the provider, and cloud consumers can use the service at a rate that is set by their particular needs. Cost and ease of use are two great benefits of cloud computing, but you must consider significant security concerns when contemplating moving critical applications and sensitive data to public and shared cloud environments. To address these concerns, the cloud provider must develop sufficient controls to provide the same or a greater level of security than the organization would have if the cloud was not used.

Cloud computing is not the only way in which network boundaries are changing. Telecommuting and outsourcing have also altered network boundaries. Telecommuting allows employees to work from home and avoid the drive to the office. The work-from-home (WFH) model adopted during the COVID-19 pandemic affected healthcare, IT, education, nonprofit, sales, and marketing, and those are just some of the industries that allow telecommuting.

Cloud computing can include virtual servers, services, applications, or anything you consume over the Internet. Cloud computing gets its name from the drawings typically used to describe the Internet. It is a modern concept that seeks to redefine consumption and delivery models for IT services. In a cloud computing environment, the end user may not know the location or details of a specific technology; it can be fully managed by the cloud service. Cloud computing offers users the ability to increase capacity or add services as needed without investing in new datacenters, training new personnel, or maybe even licensing new software. This on-demand, or elastic, service can be added, upgraded, and provided at any time.

Virtualization Strategies

Virtualization is a technology that system administrators have been using in datacenters for many years, and it is at the heart of cloud computing infrastructure. It is a technology that allows the physical resources of a computer (CPU, RAM, hard disk, graphics card, etc.) to be shared by virtual machines (VMs). Consider the old days when a single physical hardware platform—the server—was dedicated to a single application, such as a web server. It turns out that a typical web server application didn't utilize much of the underlying hardware available. If you assume that a web application running on a physical server utilizes 30 percent of the hardware resources, that means 70 percent of the physical resources are going unused and the server's capacity is largely wasted.

With virtualization, if three web servers are running via VMs with each utilizing 30 percent of the physical hardware resources of the server, 90 percent of the physical hardware resources of the server are being utilized. This is a much better return on hardware investment. By installing virtualization software on your computer, you can create VMs that can be used to work in many situations with many different applications.

A hypervisor is the software installed on a computer to support virtualization. It can also be implemented as firmware, which is specialized hardware with permanent software programmed into it. It is within the hypervisor that the VMs are created, and the hypervisor allocates the underlying hardware resources to those VMs. Examples of hypervisor software are VMware Workstation and Oracle VM VirtualBox; there are free versions of each that you can download and use. A VM is a virtualized computer that executes programs as a physical machine would.

A virtual server enables the user to run two, three, four, or more operating systems on one physical computer. For example, a virtual machine will let you run Windows, Linux, or virtually any other operating system. Virtual machines can be used for development, system administration, or production to reduce the number of physical devices needed. Exercise 9.1 shows how to convert a physical computer into a virtual image.

Virtualization sprawl is a common issue in enterprise organizations. Virtualization or VM sprawl happens when the number of virtual machines on a network grows beyond the point where system administrators can manage them correctly or efficiently. To keep this from happening, strict policies should be developed and enforced, and automation should be used to stay on top of the resources in use. Creating a VM library is helpful as long as you have a VM librarian to go with it.
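
One way to stay ahead of sprawl is to script a recurring inventory of every VM defined on each host and compare the count against policy. The following is a minimal sketch using the libvirt Python bindings against a local KVM/QEMU host; the connection URI, the policy threshold, and the output format are assumptions for illustration, not requirements.

```python
# Minimal VM-inventory sketch using the libvirt Python bindings.
# Assumes a local KVM/QEMU host and the libvirt-python package; the
# connection URI and threshold below are illustrative only.
import libvirt

MAX_MANAGED_VMS = 50  # hypothetical policy threshold

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
try:
    domains = conn.listAllDomains()          # every defined VM, running or not
    for dom in domains:
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print(f"{dom.name():30} active={bool(dom.isActive())} vCPUs={vcpus} memKiB={mem}")
    if len(domains) > MAX_MANAGED_VMS:
        print("WARNING: VM count exceeds policy threshold - possible sprawl")
finally:
    conn.close()
```

Run on a schedule, a simple report like this makes it obvious when provisioning has outpaced the policies meant to control it.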

Type 1 vs. Type 2 Hypervisors

Virtual servers can reside on a virtual emulation of the hardware layer. Using this virtualization technique, the guest has no knowledge of the host's operating system. Virtualized servers also rely on a hypervisor.

Hypervisors are classified as either Type 1 (I) or Type 2 (II). Type 1 hypervisor systems do not need an underlying OS, while Type 2 hypervisor systems do. A Type 1 hypervisor runs directly on the bare metal of a system. A Type 2 hypervisor runs on a host operating system that provides virtualization services. Each VM has its own operating system and is allocated physical hardware resources such as CPU, RAM, and hard disk, as well as network resources.

The host operating system is the operating system of the computer the hypervisor is being installed on. The guest operating system is the operating system of the VM that resides within the hypervisor.

The hypervisor validates all of the guest-issued CPU instructions and manages any executed code that requires additional privileges. VMware and Microsoft Hyper-V both rely on a hypervisor, which is also known as a virtual machine monitor (VMM). The hypervisor is the foundation of this type of virtualization; it accomplishes the following:

  • Interfaces with hardware
  • Intercepts system calls
  • Operates with the operating system
  • Offers hardware isolation
  • Enables multi-environment protection

Just like any environment, each hypervisor has its pros and cons. Some of the pros of running a VM are that you can run more than one OS at a time; you can install, reinstall, snapshot, roll back, or back up any time you want quite easily; and you manage the allocation of resources. The cons are that performance may not be as robust as on bare metal, USB and external hard drives can cause major issues, and some of us would rather roll back an image than take the time to troubleshoot an issue.

Modern computer systems have come a long way in how they process, store, and access information. Virtual memory is the combination of the computer's primary memory (RAM) and secondary storage. When these two technologies are combined, the OS lets application programs function as if they have access to more physical memory than what is actually available to them. Virtualization types can include the following:

  • Mainframe Virtual Machines This technology allows any number of users to share computer resources and prevents concurrent users from interfering with each other. Systems such as the hardware-emulated IBM mainframe running z/OS, built on AWS with IBM ZD&T, fall into this category.
  • Parallel Virtual Machines The concept here is to allow one computing environment to be running on many different physical machines. Parallel virtual machines allow a user to break complex tasks into small chunks that are processed independently.
  • Operating System Virtual Machines This category of virtual systems creates an environment in which a guest operating system can function. This is made possible by the ability of the software to virtualize the computer hardware and needed services. VMware, XEN, and Oracle VM all fall into this category of virtualization.

Technologies related to virtual systems continue to evolve. In some cases, you may not need an entire virtual system to complete a specific task. In such situations, a container can now be used. Containers allow for the isolation of applications running on a server. Containers offer a lower-cost alternative to using virtualization to run isolated applications on a single host. When a container is used, the OS kernel provides process isolation and performs resource management. Determining when to use containers instead of virtualizing the OS mostly breaks down to the type of workload you have to complete. Containers allow for applications to be deployed faster and support accelerated development. Modern container technology was popularized by Docker in 2013. Since then, Google introduced the container orchestration platform Kubernetes. Other vendors include VMware Tanzu, Microsoft Azure Kubernetes Service, and Amazon Elastic Container Service.
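
To make the "isolation without a full VM" idea concrete, the short sketch below starts a throwaway container using the Docker SDK for Python. It assumes a running Docker daemon and the docker package; the image tag is only an example.

```python
# Minimal container sketch using the Docker SDK for Python ("docker" package).
# Assumes a local Docker daemon is running; the image tag is illustrative.
import docker

client = docker.from_env()                       # talk to the local daemon
logs = client.containers.run(
    "alpine:3.19",                               # small base image
    "echo isolated hello from a container",
    remove=True,                                 # clean up when it exits
)
print(logs.decode().strip())

for container in client.containers.list():       # anything still running
    print(container.name, container.status)
```

The container shares the host kernel but gets its own isolated process and filesystem view, which is exactly the lower-cost alternative to a full VM described above.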

Virtual machines and containers have many layers of implementation. Another method of creating an environment that takes the properties of one system into another is emulation. Emulators allow you to turn your PC into a Mac and play games designed for hardware that was built decades ago. Most emulators tend to run slower than the machine they are simulating. Dolphin Emulator is a free and open-source video game console emulator that allows Nintendo GameCube and Wii games to be played on a PC or Android device. Parallels is an emulator program that allows you to run Windows on a Mac computer. Application virtualization permits a user to access applications that are not installed on their device, encapsulating the program from the OS on which it is executed. The application experience is the same as if it were present on the end user's computer. The software allows applications to run on a variety of operating systems and web browsers.

Security Advantages of Virtualizing Servers

Virtualized servers have many advantages. One of the biggest is server consolidation. Virtualization lets you host many virtual machines on one physical server. This reduces deployment time and makes better use of existing resources. Virtualization also helps with research and development. Virtualization allows rapid deployment of new systems and offers the ability to test applications in a controlled environment. Virtual machine snapshots allow for easy image backup before changes are made and thus provide a means to revert to the previous good image quickly. From a security standpoint, you have to physically protect only one server, whereas in the past you may have had to protect many. This is useful for all types of development, testing, and production scenarios.

Physical servers may malfunction or experience a hardware failure during important times or when most needed. In these situations, virtualization can be a huge advantage. Virtual systems can be imaged or replicated and moved to another physical computer very quickly. This aids the business continuity process and reduces outage time. Virtualization minimizes physical space requirements and permits the replacement of physical servers with fewer machines.

Security Disadvantages of Virtualizing Servers

With every advantage there is usually a drawback, and virtualization is no different. Virtualization adds another layer of complexity. Many books are available that explain how to manage a Microsoft server, but virtualization may result in your having a Microsoft server as a host machine with several Linux and Unix virtual servers, or multiple Microsoft systems on a single Linux machine. This new layer of complexity can cause problems that may be difficult to troubleshoot. A single physical server hosting multiple companies' virtual machines also raises the risk of comingled data; if a data breach occurs on that platform, your data may be affected. Security issues with such shared platforms can include the following:

  • Physical Access Anyone who has direct access to the physical server can most likely access the virtual systems.
  • Separation of Duties Are the employees who perform networking duties the same individuals who handle security of the virtual systems? If separation of duties is not handled correctly, a security breach may occur.
  • Misconfigured Platforms If the platform is misconfigured, it can have devastating consequences for all of the virtual systems residing on the single platform.

Virtualization also requires additional skills. Virtualization software and the tools used to work within a virtual environment add an extra burden on administrators because they will need to learn something new. Security disadvantages of virtualizing servers can also be seen in Type 1, Type 2, and container-based systems.

With a Type 1 hypervisor, you manage guests directly from the hypervisor, and any vulnerabilities of the VMs must be patched. With a Type 2 hypervisor, you also have the issue of the underlying OS and any vulnerabilities that it may have. A missed patch or an unsecured base OS could expose the OS, hypervisor, and all VMs to attack. Another real issue with Type 2 hypervisors is that such systems typically allow shared folders and the migration of information between the host and guest OSs. Sharing data increases the risk of malicious code migrating from one VM to the base system.

Some basic items to review for securing virtual systems include those in Table 9.1.

TABLE 9.1 Common security controls for virtual systems

Item                        Comments
Antivirus                   Antivirus must be present on the host and all VMs.
Hardening                   All VMs should be hardened so that nonessential services are removed.
Physical controls           Controls that limit who has access to the datacenter.
Authentication              Strong access control.
Resource access             Only administrative accounts as needed.
Encryption                  Use encryption for sensitive data in storage or transit.
Remote Desktop Services     Restrict when not needed. When it is required, use only 256-bit or higher encryption.

VDI

Remember dumb terminals and the thin client concept? This has evolved into what is known as the virtual desktop infrastructure (VDI). This centralized desktop solution uses servers to serve up a desktop operating system to a host system. Each hosted desktop virtual machine is running an operating system such as Windows 11 or Windows Server 2022. The remote desktop is delivered to the user's endpoint device via Remote Desktop Protocol (RDP), Citrix, or other architecture. Technologies such as RDP are great for remote connectivity, but they can also allow remote access by an attacker.

This system has lots of benefits, such as reduced onsite support and greater centralized management. However, a disadvantage of this solution is that there is a significant investment in hardware and software to build the backend infrastructure.

Tools that support the structural planning and construction of enterprise cloud instances, with speed and ease of use, include middleware and metadata.

Middleware sits between the operating system and an application, providing communication functionality to the user and making a connection between any two clients, servers, or databases. Advantages of using middleware include faster deployment of applications in the cloud as well as in containerized environments.

Metadata in the cloud helps organize assets, data, and virtual instances so that it is easier to find, understand, and manage information. Many different metadata tags can be used from a template or created uniquely. Most tags have a field and a type for classification, and the type can be a string, a Boolean, or a date/time. Tags are usually optional unless they are explicitly required. The most important question to ask about metadata and tags is what information your organization wants to keep track of and how that metadata will be used. Metadata can be used for compliance and governance as well as grouping for cost analysis. Fields such as data_owner could be important to one department, while data_confidentiality or storage_location could be important to another department.
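
As a simple illustration, a tag set can be modeled as field/type pairs. The sketch below is hypothetical; the tag names echo the examples above and are not tied to any particular cloud provider's schema.

```python
# Hypothetical metadata tag template; names and allowed values are examples only.
from datetime import date

asset_tags = {
    "data_owner": "finance-team",          # string
    "data_confidentiality": "restricted",  # string drawn from a fixed list
    "storage_location": "eu-west-1",       # string (region)
    "contains_pii": True,                  # Boolean
    "review_date": date(2025, 1, 31),      # date/time
}

def missing_required(tags, required=("data_owner", "data_confidentiality")):
    """Return any required tags that are absent or blank (a governance check)."""
    return [name for name in required if not tags.get(name)]

print(missing_required(asset_tags))   # [] when the asset is compliant
```

A check like missing_required is the kind of lightweight governance rule that keeps tagging useful for compliance and cost analysis.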

Deployment Models and Considerations

Cloud computing architecture can include various cloud deployment models and layers. In a public cloud, services are provided by an external provider. In a private cloud, services are implemented internally within the organization's own cloud design. A hybrid architecture offers a combination of public and private cloud services to accomplish an organization's goals. A community cloud service model is a shared and cooperative infrastructure where several organizations with common concerns share data and resources.

The following is a partial list of the top cloud provider companies:

  • Amazon Web Services (AWS)
  • Azure
  • Google Cloud
  • Alibaba Cloud
  • Salesforce
  • Adobe Creative Cloud
  • Dropbox
  • Digital Ocean
  • IBM Cloud
  • Dell

These providers offer a range of services including the following:

  • Public Clouds Available to the general public. An example would be Google Drive.
  • Private Clouds Operated for a single company or entity. An example would be a company's private cloud storage of travel expenses.
  • Hybrid Clouds A combination of a public and private cloud. An example would be a company's cloud storage of projects with varied access for internal employees and vendors.
  • Community Clouds Shared between several organizations. An example would be cloud storage for a group of schools or government offices.
  • Multitenancy Used to host a single software application that serves multiple customers through a multitenant hosting model. An example would be a collaborative workspace for several project contributors.
  • Single Tenancy Hosts a single software application designed to support one customer through a single-tenant hosting model. An example would be a specialized HR application for one organization. Single tenancy is generally more secure because of isolation, and the customer controls access, backups, and scaling costs; however, it also requires more maintenance because single-tenant environments need more updates and upgrades, which are managed by the customer.

Business Directives

On-demand, or elastic, cloud computing changes the way information and services are consumed and provided. Users can consume services at a rate that is set by their particular needs. Cloud computing offers several benefits, including the following:

  • Reduces Cost Cloud technology negates the need for procurement and maintenance of a company's own infrastructure and/or applications. Additionally, a cloud service is paid for as needed and can grow and shrink as business demands. This allows for scalability, which results in cost savings.
  • Increases Storage and Scalability Cloud providers offer elastic storage capacity at lower cost. For some global cloud providers, these storage locations are regional and redundant, with layers of security and data backups built in, so if one storage location goes down, the resources, applications, data, and access are still available.

  • Provides High Degree of Automation Fewer employees are needed because local systems have been replaced with cloud-based solutions. The user does not need IT personnel to patch and update servers that have been outsourced to the cloud.
  • Offers Flexibility and Data Protection Cloud computing offers much more flexibility than local-based solutions. Cloud data protection is practiced wherever the data is located, whether it is at rest or in motion. Because of this flexibility, security can be managed internally by the enterprise organization or externally by a third party or the providers themselves.
  • Provides More Mobility with Variety of Locations One of the big marketing plugs is that users can access their data anywhere rather than having to remain at their desks. There are ways of deploying applications and data in the cloud to make it accessible from anywhere.
  • Allows the Company's IT Department to Shift Focus No hardware updates are required by the company—the cloud provider is now responsible. Companies are free to concentrate on innovation.

According to the International Data Corporation (IDC):

“The proliferation of devices, compliance, improved system performance, online commerce, and increased replication to secondary or backup sites is contributing to an annual doubling of the amount of information transmitted over the Internet.”

What this means is that we are now dealing with much more data than in the past. Servers sometimes strain under the load of stored and accessed data. The cost of dealing with large amounts of data is something that all companies must address.

There are also increased economic pressures to stay competitive, and companies are looking at cost-saving measures. Cloud computing provides much greater flexibility than previous computing models, but the danger is that customers may skip the due diligence they still need to perform.

The benefits of cloud computing are many. One of the real advantages of cloud computing is the ability to use someone else's storage. Another advantage is that when new resources are needed, the cloud can be leveraged, and the new resources may be implemented faster than if they were hosted locally at your company. With cloud computing, you pay as you go. Another benefit is the portability of the application. Users can access data from work, from home, or at client locations. There is also the ability of cloud computing to free up IT workers who may have been tied up performing updates, installing patches, or providing application support. The bottom line is that all of these reasons lead to reduced capital expense, which is what all companies are seeking. In Exercise 9.2 you will examine the benefits of cloud computing.

Service Models

Cloud models can be broken into several basic designs that include infrastructure as a service, monitoring as a service, software as a service, and platform as a service. Each design is described here:

  • Infrastructure as a Service Infrastructure as a service (IaaS) describes a cloud solution where you are buying infrastructure. You purchase virtual power to execute your software as needed. This is much like running a virtual server on your own equipment, except that the virtual server now runs on the provider's virtual infrastructure. This model is similar to a utility company model, as you pay for what you use. An example of this model is Amazon Web Services, aws.amazon.com. (A minimal provisioning sketch follows this list.)
  • Monitoring as a Service Monitoring as a service (MaaS) offers a cloud-based monitoring solution. This includes monitoring for networks, application servers, applications, and remote systems. An example of this model is AppDynamics, a division of Cisco at www.appdynamics.com. It provides a Java-based MaaS solution.
  • Software as a Service Software as a service (SaaS) is designed to provide a complete packaged solution. The software is rented out to the user. The service is usually provided through some type of front end or web portal. While the end user is free to use the service from anywhere, the company pays a per-use fee. As an example, Salesforce is a customer relationship management service providing customer service, marketing automation, analytics, and application development; it offers this type of service at www.salesforce.com.
  • Platform as a Service Platform as a service (PaaS) provides a platform for your use. Services provided by this model include all phases of the software development life cycle (SDLC) and can use application programming interfaces (APIs), website portals, or gateway software. These solutions tend to be proprietary, which can cause problems if the customer moves away from the provider's platform. An example of PaaS is Google App Engine, cloud.google.com/appengine.
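
Returning to the IaaS model, the hedged sketch below provisions a single small virtual server with the AWS SDK for Python (boto3). It assumes AWS credentials are already configured, and the AMI ID is a placeholder rather than a real image; it is meant only to show how "pay for what you use" infrastructure is requested through an API.

```python
# Hedged IaaS provisioning sketch using boto3 (AWS SDK for Python).
# Assumes configured AWS credentials; the AMI ID below is a placeholder.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",           # small pay-as-you-go instance size
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)
```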

With so many different cloud-based services available, it was only a matter of time before security moved to the cloud. Such solutions are known as security as a service (SECaaS). SECaaS is a cloud-based solution that delivers security as a service from the cloud. SECaaS functions without requiring onsite hardware, and as such it avoids substantial capital expenses. The following are some examples of the type of security services that can be performed from the cloud:

  • Antispam Cloud-based antispam services can be used to detect spam email. Providers include SpamTitan, MailWasher, and MX Guarddog.
  • Antivirus Cloud-based antivirus applications offer a number of benefits, and they can be useful for quickly scanning a PC for malware. Two examples of such services are Webroot and Avast.
  • Anti-malware Cloud-based anti-malware monitors and reacts to more than viruses. Anti-malware stops a broader set of malicious software. A good example of such a tool is Malwarebytes.
  • Content Filtering This cloud service allows companies to outsource content filtering so that the cloud-based provider can manage and monitor all outbound and inbound traffic; tools such as FortiGuard and Cisco Umbrella provide this capability.
  • Cloud Security Broker A cloud security broker acts as a gateway or go-between placed between an organization's infrastructure and the cloud service provider (CSP). It is becoming more commonly known as a cloud access security broker (CASB). There is no hard boundary on how a CASB functions or what benefits the organization can expect from it. A CASB may react to threats, performing like an IDS/IPS, or it may send alerts on learned activity and inspect logs, performing more like a SIEM. Realistically, most CASBs function in both ways. According to Gartner's 2020 Magic Quadrant for CASB, the leading vendors are McAfee, Microsoft, Netskope, and Bitglass.
  • Hash Matching This service allows the user to search for known malicious files quickly or to identify known good files by searching online repositories for hash matches. One great example can be found at www.hashsets.com. This hash set is maintained by the National Software Reference Library (NSRL). These hashes can be used by law enforcement, government, and industry organizations to review files on a computer by matching file profiles in the database. (A short hash-comparison sketch follows this list.)
  • Sandboxing A cloud-based sandbox is a stand-alone environment that allows you to view or execute a program safely while keeping it contained. Good examples of sandbox services include Zscaler and FortiSandbox.
  • Managed Security Service Providers Managed security service providers (MSSPs) provide outsourced monitoring and management of security devices and systems. Common services include managed firewall, intrusion detection, virtual private network, vulnerability scanning, and antiviral services. MSSPs use high-availability security operations centers at their own facilities or from other datacenter providers to provide 365 × 24 × 7 services designed to help reduce the number of operational security personnel an enterprise needs to hire, train, and retain in order to maintain an acceptable security posture.
  • Vulnerability Scanning Many companies don't have the expertise or capability to perform all of the security services they need. One such service that can be outsourced is vulnerability scanning. Cloud-based solutions offload this activity to a third-party provider.
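
The hash-matching service described above reduces to a simple operation: compute a file's digest locally and compare it against a set of known hashes. The following sketch uses Python's hashlib; the file path and the example digest are placeholders, not entries from any real repository.

```python
# Minimal hash-matching sketch: compare a file's SHA-256 digest against a set
# of known hashes. The path and the example digest are placeholders.
import hashlib

KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # example digest
}

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

file_hash = sha256_of("suspect_download.bin")
print("match against known set:", file_hash in KNOWN_HASHES)
```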

From a security standpoint, one of the first questions that must be answered in improving the overall security posture of an organization is where data resides. The advances in technology make this much more difficult than in the past. Years ago, Redundant Array of Inexpensive/Independent Disks (RAID) was the standard for data storage and redundancy. Today, companies have moved to dynamic disk pools (DDPs) and cloud storage. DDP shuffles data, parity information, and spare capacity across a pool of drives so that the data is better protected and downtime is reduced. DDPs can be rebuilt up to eight times faster than traditional RAID.

Enterprise storage infrastructures may not have adequate protection mechanisms. The following basic security controls should be implemented:

  • Know your assets. Perform an inventory to know what data you have and where it is stored.
  • Build a security policy. A corporate security policy is essential. Enterprise storage is just one item that should be addressed.
  • Implement controls. The network should be designed with a series of technical controls, such as the following:
    • Intrusion detection system (IDS)/intrusion prevention system (IPS)
    • Firewalls
    • Network access control (NAC)
  • Harden your systems. Remove unnecessary services and applications.
  • Perform proper updates. Use patch management systems to roll out and deploy patches as needed.
  • Segment the infrastructure. Segment areas of the network where enterprise storage mechanisms are used.
  • Use encryption. Evaluate protection for data at rest and for data in transit. (A minimal encryption sketch follows this list.)
  • Implement logging and auditing. Enterprise storage should have sufficient controls so that you can know who attempts to gain access, what requests fail, when changes to access are made, or when other suspicious activities occur.
  • Use change control. Use change control and IT change management to control all changes. Changes should occur in an ordered process, documented with a plan to roll back if systems crash.
  • Implement trunking security. Trunking security is typically used with VLANs. The concept is to block access to layer 2 devices based on their MAC addresses. Blocking a device by its MAC address effectively prevents the device from communicating through any network switch. This stops the device from propagating malicious traffic to any other network-connected devices.
  • Employ port security. When addressing the control of traffic at layer 2 on a switch, the term used today is port security. Port security specifically speaks to limiting what traffic is allowed in and out of particular switch ports. This traffic is controlled per layer 2 address or MAC address. One example of typical port security is when a network administrator knows that a fixed set of MAC addresses are expected to send traffic through a switch port. The administrator can employ port security to ensure that traffic from no other MAC address will be allowed to use that port and traverse the switch.
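
For the encryption control above, authenticated encryption is the usual choice for data at rest. The following is a minimal sketch using AES-256-GCM from the Python cryptography package; in a real deployment the key would be generated and stored in a key management system, not in the script.

```python
# Minimal data-at-rest encryption sketch using AES-256-GCM from the
# "cryptography" package. In production the key belongs in a KMS or HSM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key
aesgcm = AESGCM(key)

plaintext = b"quarterly payroll export"
nonce = os.urandom(12)                      # unique 96-bit nonce per message
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext               # confidentiality plus integrity
```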

Cloud Provider Limitations

Cloud computing has many benefits, but there are disadvantages as well, especially for smaller organizations.

  • Downtime Loss of access is one of the biggest disadvantages and can occur for any reason.
  • Privacy Cloud service providers are expected to manage and safeguard the underlying hardware infrastructure, but given recent breaches of credit card data and stolen login credentials, you have to stay aware of best practices.
  • Limited control To different degrees, cloud users find they have less control over cloud-hosted infrastructure, and switching between cloud services can be difficult.
  • Limited Internet Protocol (IP) address scheme Cloud providers can limit IP address ranges and availability, so what is used in the cloud differs from what is used on premises. Cloud addressing is also usually dynamic, so users cannot force an on-premises IP address to route properly to a cloud provider.
  • VPC Peering Limits A virtual private cloud (VPC) gives a customer exclusive access to a segment of a public cloud; this deployment is a compromise between a private and a public model in terms of price and features. A VPC can experience networking connectivity issues because of incorrect or missing route tables. Only one peering connection can exist between two VPCs at the same time, and the VPCs must be able to communicate as if they are on the same network using private IP addressing. Access can also be restricted by the user's physical location by employing firewalls and IP address whitelisting. Using the cloud is a trade-off—you gain speed, performance, and cost savings, but you lose some control over the security processes.

Extending Appropriate On-Premises Controls

Although cost and ease of use are two great benefits of cloud computing, there are significant security concerns when considering on-demand/elastic cloud computing.

Cloud computing is a big change from the way IT services have been delivered and managed in the past. One of the advantages is the elasticity of the cloud, which provides the online illusion of an infinite supply of computing power. Cloud computing places assets outside the owner's security boundary. Historically, items inside the security perimeter were trusted, whereas items outside were not. With cloud computing, an organization is being forced to place their trust in the cloud provider. The cloud provider must develop sufficient controls to provide the same or a greater level of security than the organization would have if the cloud were not used.

As a CASP+, you must be aware of the security concerns of moving to a cloud-based service. The pressures are great to make these changes, but there is always a trade-off between security and usability. Here are some basic questions that a security professional should ask when considering cloud-based solutions and the controls that must be put in place.

  • Does the data fall under regulatory requirements? Different countries have different requirements and controls placed on access. For example, organizations operating in the United States, Canada, or the European Union have many regulatory requirements. Examples of these include ISO 27002, Safe Harbor, Information Technology Infrastructure Library (ITIL), and Control Objectives for Information and Related Technology (COBIT). The CASP+ is responsible for ensuring that the cloud provider can meet these requirements and is willing to undergo certification, accreditation, and review as needed.
  • Who can access the data? Defense in depth is built on the concept that every security control is vulnerable. Cloud computing places much of the control in someone else's hands. One big area that is handed over to the cloud provider is access. Authentication and authorization are real concerns. Because the information or service now resides in the cloud, there is a much greater opportunity for access by outsiders. Insiders might pose an additional risk.

    Insiders, or those with access, have the means and opportunity to launch an attack and only lack a motive. Anyone considering using the cloud needs to look at who is managing their data and what types of controls are applied to individuals who may have logical or physical access.

  • Does the cloud provider use a data classification system? A CASP+ should know how the cloud provider classifies data. Classification of data can run the gamut from a fully deployed classification system with multiple levels to a simple system that separates sensitive and unclassified data. Consumers of cloud services should ask whether encryption is used and how one customer's data is separated from another customer's data. Is encryption being used for data in transit or just for data at rest? Consumers of cloud services will also want to know what kind of encryption is being used. For instance, is the provider using Advanced Encryption Standard (AES) 128 or 256? How are the keys stored? Is the encryption mechanism being used considered a strong one? One strong control is virtual private storage, which provides encryption that is transparent to the user. Virtual private storage is placed in your screened subnet and configured to encrypt and decrypt everything that is coming and going from your network up to the cloud.
  • What training does the cloud provider offer its employees? This is a rather important item in that people will always be the weakest link in security. Knowing how your provider trains their employees is an important item to review. Training helps employees know what the proper actions are and understand the security practices of the organization.
  • What are the service level agreement (SLA) terms? The SLA serves as a contracted level of guaranteed service between the cloud provider and the customer. An SLA is a contract that provides a certain level of protection. For a fee, the vendor agrees to repair, replace, or provide service within a contracted period of time. An SLA is usually based on what the customer specifies as the minimum level of services that will be provided.
  • Is there a right to audit? This particular item is no small matter in that the cloud provider should agree in writing to the terms of audit. Where and how is your data stored? What controls are used? Do you have the right to perform a site visit or review records related to access control or storage?
  • Does the cloud provider have long-term viability? Regardless of what service or application is being migrated to a cloud provider, you need to have confidence in the provider's long-term viability. There are costs not only to provision services but also for de-provisioning should the service no longer be available. If they were to go out of business, what would happen to your data? How long has the cloud provider been in business, and what is their track record? Will your data be returned if the company fails and, if so, in what format?
  • How will the cloud provider respond if there is a security breach? Cloud-based services are an attractive target for computer criminals. If a security incident occurs, what support will you receive from the cloud provider? To reduce the amount of damage that these criminals can cause, cloud providers need to have incident response and handling policies in place. These policies should dictate how the organization handles various types of incidents. Cloud providers must have a computer security incident response team (CSIRT) that is tied into customer notification policies for law enforcement involvement.
  • What is the business continuity and disaster recovery plan (BCDR)? Although you may not know the physical location of your services, they are physically located somewhere. All physical locations face threats, such as fire, storms, natural disasters, and loss of power. In case of any of these events, the CASP+ will need to know how the cloud provider responds and what guarantee of continued services they are promising. There is also the issue of retired, replaced, or damaged equipment. Items such as hard drives need to be decommissioned properly. Should sensitive data be held on discarded hard drives, data remanence is a real issue. Data remanence is the residual data, or remnants, left on the media after formatting or drive wiping. The only way to ensure there are no data remnants is to physically destroy the media.

In Exercise 9.3, you will examine some common risks and issues associated with cloud computing as they would affect your organization.

Data Sovereignty and the Cloud

Data sovereignty refers to a country's laws and the control that country has over the data that resides within its jurisdiction. A country's data laws could restrict the cross-border transfer of data, imposing legal requirements that may conflict with those of the country in which the user currently resides. Data laws can impose jurisdiction over data that may change as the data is transferred across borders. Legal obligations are different from privacy, data security, and transfer obligations that may apply if the data is hosted within different countries or is controlled by different cloud providers.

There is no known uniform, worldwide regulation that governs the protection of a user's data, but the General Data Protection Regulation (GDPR) comes as close as any standard to meeting this objective so far. Laws of various countries often differ in terms of where the data is stored and where the third-party storage provider is based. For example, a U.S.-based company may opt to store financial data in Ireland or protected health information (PHI) in Germany. As mentioned, the GDPR is an example of a more recent regulation that affects any online organization that collects or processes the personal data of people in the European Union (EU) countries. The GDPR is a regulation specific to the area of data privacy, and unlike a single country's law (say, a German statute), it applies across the EU and even to organizations outside it that handle EU residents' data. As of May 25, 2018, any such organization must ensure compliance with the GDPR or face substantial penalties. The GDPR is the strongest case of data sovereignty through regulation to date.

To complicate matters further, some countries have laws against overly strong encryption. This can result in complex compliance issues.

When addressing potential data sovereignty issues, corporations can begin the process by analyzing the different technical, legal, and business issues. Corporations should also conduct a detailed analysis of the following:

  • Legal issues to include licenses, industry regulation, labor laws, intellectual property, and digital assets within the corporations
  • The application of particular provisions of data sovereignty laws that are relevant within each jurisdiction

Cloud Computing Vulnerabilities

Computer criminals always follow the money, and as more companies migrate to cloud-based services, look for the criminals to follow. Here are some examples of attacks to which cloud services are vulnerable:

  • Authentication Attacks Authentication systems may not adequately protect your data. Authentication is a weak point in many systems, and cloud-based services are no exception. There are many types of authentication attacks, including cross-site scripting (XSS) and cross-site request forgery (CSRF). The mechanisms and methods used to secure the authentication process are a frequent target of attackers.
  • Denial of Service A denial-of-service (DoS) attack seeks to disrupt availability or access to a service, application, infrastructure, or some other resource that is normally available. The attack begins by sending a massive wave of illegitimate requests over the network. The massive amount of traffic can overwhelm nearby network devices or a targeted system, thus preventing authorized users from having access. In some cases, this traffic originates not from one source, but from many. When the attack is a shared effort by several sources, it is called a distributed denial-of-service (DDoS) attack. In recent years, DDoS attacks have increased significantly, often for financial gain. DDoS attacks can be launched for extortion, so-called hacktivism, or other reasons to disrupt normal operations. Tools such as Low Orbit Ion Cannon (LOIC) are easily accessible for these activities. Cybercriminals often use botnets to launch the attacks.
  • Data Aggregation and Data Isolation Sometimes too much or too little of something can be a bad thing. For example, can a cloud provider use the data for its own purposes? Can the provider aggregate your data along with that of other clients and then resell this information? Also, is your data on a stand-alone server or is it on a virtual system that is shared with others? In such cases, your data may be stored along with data from other companies. This raises concerns about the comingling of data.
  • Data Remanence Your data will most likely not be needed forever. This means that data disposal and destruction are real concerns. An attacker could attempt to access retired hard drives and look for remaining data. Even in situations where the drives have been formatted or wiped, there may be some remaining data. The remaining data (data remanence) could be scavenged for sensitive information. Other data exfiltration techniques include hacking backups or using backdoors and covert channels to send data back to the attacker. To achieve this end, social engineering techniques are used, such as going after cloud employees with access and targeting the cloud employees at their homes, since so many engineers maintain less secured paths back to their work networks.

Other kinds of attacks include keyloggers, custom malware sent via phishing (such as malicious PDFs), and trojaned USB keys dropped in the cloud provider employee parking lot. A dedicated attacker who is targeting a big enough cloud provider might even apply for a job at the facility, simply to gain some level of physical access.

All systems have an inherent amount of risk. The goal of the security professional is to evaluate the risk and aid management in deciding on a suitably secure solution. Cloud computing offers real benefits to companies seeking a competitive edge in today's economy. Many more providers are moving into this area, and the competition is driving prices even lower.

Attractive pricing, the ability to free up staff for other duties, and the ability to pay for services as needed will continue to drive more businesses to consider cloud computing. Before any services are moved to the cloud, the organization's senior management should assess the potential risk and understand any threats that may arise from such a decision. One concern is that cloud computing blurs the natural perimeter between the protected inside and the hostile outside. Security of any cloud-based services must be closely reviewed to understand what protections exist for your information. There is also the issue of availability. This availability could be jeopardized by a DoS attack or by the service provider suffering a failure or going out of business. Also, what if the cloud provider goes through a merger? What kind of policy changes occur? What kind of notice is provided in advance of the merger? All of these issues should be covered in the contract.

Unfortunately, one of the largest vulnerabilities in the cloud is simple customer error or misconfiguration. Cloud misconfiguration can be any error or gap that leaves risk exposed. This risk could be exploited by an attacker or malicious insider, and it doesn't take much technical knowledge to extract data or compromise cloud assets. Security researchers disclosed that a nonprofit organization in Los Angeles exposed more than 3.5 million records, including PII, when AWS S3 storage buckets holding its databases were configured to be “public and anonymously accessible.” Misconfigured cloud services pose a high security risk, so make sure the people administering your cloud are well trained.

Storage Models

Even though your data is in the cloud, it must physically be located somewhere. Is your data on a separate server, is it co-located with the data of other organizations, or is it sliced and diced so many times that it's hard to know where it resides? Your cloud storage provider should agree in writing to provide the level of security required for your customers.

Tape was the medium of choice for backup and archiving for most businesses for many years. This was in part due to the high cost of moving backup and archival data to a data warehouse. Such activities required hundreds of thousands of dollars in infrastructure investment. Today that has started to change as cloud service providers are beginning to sell attractively priced services for cloud storage. Such technologies allow companies to do away with traditional in-house technologies. Cloud-based archiving and warehousing have several key advantages.

  • Content Management The cloud warehousing provider manages the content for you.
  • Geographical Redundancy Data is held at more than one offsite location.
  • Advanced Search Data is indexed so that retrieval of specific datasets is much easier.

How much storage is enough? How big a hard drive should I buy? These are good questions—there never seems to be enough storage space for home or enterprise users. Businesses depend on fast, reliable access to information critical to their success. This makes enterprise storage an important component of most modern companies. Enterprise storage can be defined as computer storage designed for large-scale, high-technology environments.

Think of how much data is required for most modern enterprises. There is a huge dependence on information for the business world to survive. Organizations that thrive on large amounts of data include government agencies, credit card companies, airlines, telephone billing systems, global capital markets, e-commerce, and even email archive systems. Although the amount of storage needed continues to climb, there is also the issue of terminology used in the enterprise storage market. Terms such as heterogeneous, SAN, NAS, virtualization, and cloud storage are frequently used.

Before any enterprise storage solution is implemented, a full assessment and classification of the data should occur. This would include an analysis of all threats, vulnerabilities, existing controls, and the potential impact if loss, disclosure, modification, interruption, or destruction of the data should occur.

Now that we've explored some of the security issues of enterprise storage, let's look at some of the technologies used in enterprise storage.

  • Virtual Storage Virtual storage options have grown, evolved, and matured. These online entities typically focus either on storage or on sharing. The storage services are designed for storing large files. Many companies are entering this market and now giving away storage, such as Microsoft's OneDrive, Amazon Drive, and Google Drive.

    Virtual file sharing services are a second type of virtual storage. These services are not meant for long-term use. They allow users to transfer large files. Examples of these services include Dropbox, DropSend, and MediaFire. These virtual services work well if you are trying to share very large files or move information that is too big to be sent as an attachment.

    On the positive side, there are many great uses for these services, such as keeping a synchronized copy of your documents in an online collaboration environment, sharing documents, and synchronizing documents between desktops, laptops, tablets, and smartphones.

    The disadvantages of these services include the fact that you are now placing assets outside the perimeter of the organization. There is also the issue of loss of control. If these providers go out of business, what happens to your data? Although these services do fill a gap, they can be used by individuals to move data illicitly. Another concern is the kind of controls placed on your data. Some of these services allow anyone to search sent files.

    In Exercise 9.4, you'll look at security issues involved in online storage.

  • Network-Attached Storage Network-attached storage (NAS) is a technology that contains, or has slots for, one or more hard drives and uses a hierarchical storage methodology. These hard drives are used for network storage. NAS is similar to direct-attached storage (DAS), but DAS is simply an extension of one system and has no networking capability.

    Many NAS devices make use of the Linux OS and provide connectivity via network file sharing protocols. One of the most common protocols used is Network File System (NFS). NFS is a standard designed to share files and applications over a network. NFS was developed by Sun Microsystems (now part of Oracle) back in the mid-1980s. The Windows-based counterpart used for file and application sharing is Common Internet File System (CIFS); it is an open version of Microsoft's Server Message Block (SMB) protocol.

    For the CASP+ exam, this is often referred to as object storage or file-based storage. This type of cloud solution is seen in Amazon Simple Storage Services (S3).

  • SAN The Storage Network Industry Association (SNIA) defines a storage area network (SAN) as “a data storage system consisting of various storage elements, storage devices, computer systems, and/or appliances, plus all the control software, all communicating in efficient harmony over a network.” SANs are similar to NAS. One of the big differences is that NAS appears to the client as a file server or stand-alone system. A SAN appears to the client OS as a local disk or volume that is available to be formatted and used locally as needed.

    For the CASP+, this is referred to in the objectives as block storage. Block cloud storage solutions include Amazon Elastic Block Store (EBS) and are provisioned with ultra-low latency for high performance.

  • Virtual SAN A virtual SAN (VSAN) is a SAN that offers isolation among devices that are physically connected to the same SAN fabric. A VSAN is sometimes called fabric virtualization. (Fabric can be defined as the structure of the SAN.) VSANs were developed to support independent virtual networking capability on a single switch. VSANs improve consolidation and simplify management by allowing for more efficient SAN utilization. A resource on any individual VSAN can be shared by other users on a different VSAN without merging the SAN's fabrics.
  • Redundancy (Location) Location redundancy is the idea that content should be accessible from more than one location. An extra measure of redundancy can be provided by means of a replication service so that data is available even if the main storage backup system fails. This further enhances a company's resiliency and redundancy. Database shadowing, remote journaling, and electronic vaulting are all common methods used for redundancy. Electronic vaulting describes the transfer of data by electronic means rather than a physical shipment of backup tapes. Some organizations use these techniques by themselves, whereas others combine these techniques with other backup methods.
  • Secure Storage Management and Replication Secure storage management and replication systems are designed to enable a company to manage and handle all corporate data in a secure manner with a focus on the confidentiality, integrity, and availability of the information. The replication service allows for the data to be duplicated in real time so that additional fault tolerance is achieved.
  • Multipath Solutions Enterprise storage multipath solutions reduce the risk of data loss or lack of availability by setting up multiple routes between a server and its drives. The multipathing software maintains a list of all requests, passes them through the best possible path, and reroutes communication if one of the paths dies. One of its major advantages is its speed of access.

  • SAN Snapshots SAN snapshot software is typically sold with SAN solutions and offers the user a way to bypass typical backup operations. The snapshot software has the ability to stop writing to a physical disk temporarily and then make a point-in-time backup copy. Think of these as being similar to Windows System Restore points in that they allow you to take a snapshot in time. Snapshot software is typically fast and makes a copy quickly, regardless of the drive size.
  • Data De-duplication Data de-duplication is the process of removing redundant data to improve enterprise storage utilization. Redundant data is not copied; it is replaced with a pointer to the one unique copy of the data. Only one instance of redundant data is retained on the enterprise storage media, such as disk or tape.
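
A de-duplication pass can be sketched in a few lines: hash each block, store each unique block once, and keep pointers (hashes) for everything else. The toy example below uses fixed-size chunks and ignores real-world concerns such as variable-size chunking and persistence.

```python
# Toy block-level de-duplication sketch: store each unique block once and
# reference duplicates by hash. Fixed-size chunks, in-memory store only.
import hashlib

BLOCK_SIZE = 4096
block_store = {}     # hash -> block bytes (one copy of each unique block)

def dedupe(data: bytes):
    """Return the list of block hashes ("pointers") that reconstruct data."""
    pointers = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)   # stored only once
        pointers.append(digest)
    return pointers

pointers = dedupe(b"A" * 8192 + b"B" * 4096)     # two identical "A" blocks
print(len(pointers), "pointers,", len(block_store), "unique blocks stored")
```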

Storage Configurations

Data dispersion is the distribution and storage of information across multiple cloud pods and is a key component of cloud storage architecture. The ability to have data replicated throughout a distributed storage infrastructure is critical. This allows a cloud service provider to offer storage services based on the level of the user's subscription or the popularity of the item. Bit splitting is another technique for securing data over a computer network that involves encrypting data, splitting the encrypted data into smaller data units, distributing those smaller units to different storage locations, and then further encrypting the data at its new location. Data is protected from security breaches, because even if an attacker is able to retrieve and decrypt one data unit, the information is useless unless it can be combined with decrypted data units from the other locations.
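
As a simplified illustration of the encrypt-then-split idea, the sketch below encrypts a payload with Fernet (from the Python cryptography package) and scatters the ciphertext into fragments keyed by hypothetical storage locations. A real implementation would also re-encrypt each fragment at its destination, as described above; the location names here are made up.

```python
# Simplified bit-splitting sketch: encrypt, then scatter the ciphertext into
# fragments across hypothetical storage locations.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
token = Fernet(key).encrypt(b"customer ledger snapshot")

locations = ["us-east-pod", "eu-west-pod", "ap-south-pod"]   # hypothetical pods
fragment_size = (len(token) + len(locations) - 1) // len(locations)
dispersed = {
    loc: token[i * fragment_size:(i + 1) * fragment_size]
    for i, loc in enumerate(locations)
}

# Reassembly requires every fragment plus the key, which is held elsewhere.
reassembled = b"".join(dispersed[loc] for loc in locations)
assert Fernet(key).decrypt(reassembled) == b"customer ledger snapshot"
```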

Security Implications with Storage

Whether you are storing objects, files, databases, blocks of data, or binary large objects (BLOBs) in the cloud, there are several best practices that help ensure the safety of your information:

  • Apply data protection policies.
  • Limit access and implement classification.
  • Prevent data exfiltration to unmanaged devices.
  • Audit configuration for critical settings.

Encryption of sensitive data in the cloud is a vital security step. There are many ways to implement key design with a data store. A data store is a repository for storing and managing a collection of data.

A key/value store associates each data value with a specific and unique key. To modify a value, the application overwrites the entire value stored under that key. Key/value stores are extremely scalable because the keys can be distributed across multiple instances. Amazon DynamoDB is probably the most well-known key/value store. A key-value pair consists of two related pieces of data: the key is a constant, such as color, and the value, such as an article of clothing, belongs to that set of data. A fully formed key-value pair could be the “color=green, clothing=shirt” pair.
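
As an illustration of the key-value model, the sketch below writes and overwrites that "color=green, clothing=shirt" pair using boto3 against a hypothetical DynamoDB table named Wardrobe; the table name, partition key, and attribute names are assumptions made for the example.

import boto3

# Hypothetical table with "color" as its partition key.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Wardrobe")

# Store the key-value pair: the key "color=green" maps to the value "shirt".
table.put_item(Item={"color": "green", "clothing": "shirt"})

# Modifying the value means writing the whole item again under the same key.
table.put_item(Item={"color": "green", "clothing": "jacket"})

# Reads go straight to the unique key, which is what makes the model so scalable.
response = table.get_item(Key={"color": "green"})
print(response["Item"])        # {'color': 'green', 'clothing': 'jacket'}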

In addition to cloud-based storage sites, other storage types pose security and privacy concerns. The actual risks depend largely on whether the storage is nonremovable or removable. USB On-The-Go (USB OTG) solves the problem of not being able to connect a standard USB flash drive directly to a mobile device: it is flash drive storage with a physical interface capable of attaching to almost every smartphone or small form-factor device. The risk of misplacing this portable storage depends on how private or critical the information stored on it is. Other removable storage, such as a swappable drive from a larger device, also carries the risk of being easily misplaced or maliciously removed from its bay, and a malicious person may transfer or send backup data to removable or uncontrolled storage.

In Exercise 9.5, you will use the cloud to store and transfer a large file.

How Cloud Technology Adoption Impacts Organization Security

Years ago, many organizations resisted cloud technology adoption because of the lack of control and understanding. There are risks, threats, and vulnerabilities no matter where the data is stored. Moving to the cloud is a big decision, and modern cloud computing has many benefits including increased security, flexibility, and cost savings.

Automation and Orchestration

Cloud automation is technology that carries out processes and procedures without human intervention. By having decisions made automatically based on defined relationships and the actions that should be taken, human mistakes can be avoided, and processes that once required involvement of IT staff can happen on their own. By using automation of a single task or orchestration of many automated tasks, enterprise organizations improve standard operating procedures for specific use cases as well as increase efficiency and consistency. There are several cloud orchestration tools, including Puppet, Ansible, and Chef. These three tools are fairly simple to use and have robust capabilities. Puppet works best for automated provisioning of assets and configuration automation with great visualization and reporting. Chef is used more for compliance and security management, while Ansible is the easiest of the three to implement; Ansible is good for simple orchestration but does not scale in large environments as well as the other two.
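
As a small illustration of cloud automation (separate from the orchestration tools named above), the following Python sketch uses boto3 to snapshot every EBS volume carrying a hypothetical Backup=daily tag; the tag convention is an assumption, and in practice a task like this would be scheduled or triggered by an orchestration tool rather than run by hand.

import boto3

ec2 = boto3.client("ec2")

# Find volumes tagged Backup=daily (a hypothetical tagging convention).
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["daily"]}]
)["Volumes"]

for volume in volumes:
    # Automating the snapshot removes the human step (and the human mistakes).
    snapshot = ec2.create_snapshot(
        VolumeId=volume["VolumeId"],
        Description=f"Automated daily snapshot of {volume['VolumeId']}",
    )
    print(f"Started snapshot {snapshot['SnapshotId']} for {volume['VolumeId']}")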

Encryption Configuration

Sensitive information is being moved to the cloud now more than ever. According to the Ponemon Institute, the average cost of a breach is now more than $3.8 million USD. One of the most essential elements of preventing the loss of data being moved to the cloud is having robust encryption for the data while at rest as well as in transit.

Encryption makes data unreadable to anyone without access to the encryption keys. Kerckhoffs's principle states that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge. Only 1 percent of cloud providers support tenant-managed encryption keys. With the right tools and configuration, you can protect data with standards-based AES encryption for data at rest, ensuring compliance with PCI, HIPAA, and other federal or industry requirements.
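
The sketch below shows standards-based AES encryption for data at rest using Python's cryptography package (AES-256 in GCM mode). It is a minimal example only; key storage and rotation, which matter as much as the algorithm, are outside its scope.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, held in a KMS/HSM, not in code
aesgcm = AESGCM(key)

plaintext = b"cardholder data subject to PCI requirements"
nonce = os.urandom(12)                      # a GCM nonce must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Without the key, the stored blob (nonce + ciphertext) is unreadable.
stored_blob = nonce + ciphertext
recovered = aesgcm.decrypt(stored_blob[:12], stored_blob[12:], None)
assert recovered == plaintext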

Cloud encryption solutions can encrypt information as it moves in and out of applications and into storage with strong key-based encryption. Most reputable cloud service providers offer cloud encryption options. The most widely used protection for cloud data in transit is the HTTPS protocol. When using Transport Layer Security (TLS), the more modern and secure successor to Secure Sockets Layer (SSL), all traffic is encrypted so that only authorized users can access the data. If an unauthorized third party sees the data, it remains unreadable because the keys needed to lock and unlock it are held at the user and destination endpoints. Session keys should be generated and exchanged using an asymmetric algorithm between trusted entities, and certificates are validated during the initial connection.

Logs

Every device on a modern cloud network generates logs. Some logs are human readable, and some logs look like gibberish. Some logs are more useful than others, and we should understand which cloud logs need to be preserved for future analysis and for how long. You don't need to log everything, but what you do log should be purposely collected and managed because the logs can show you who did what activity and how the systems they touched responded.

The Center for Internet Security (CIS) Critical Security Controls Version 8 focuses on the collection, maintenance, monitoring, and analysis of audit logs. Our organizations are evolving quickly, and we have to learn to deal with log data in the big data cloud era. Analyzing audit logs is a vital part of security, not just for system security but for processes and compliance. Part of the process of log analysis is reconciling logs from different sources and correlating them, even if those devices are in different time zones. Network Time Protocol (NTP) helps synchronize devices using the cloud, and Google Cloud has its own NTP service called Google Public NTP.

In a basic network topology, you will have many types of devices, including routers, switches, firewalls, servers, and workstations. Each of these devices that helps connect you to the rest of the world will generate logs based on its operating system, configuration, and software. Examining logs is one of the most effective ways of looking for issues and investigating problems happening on a system or in an application.

Synchronization and the ability to correlate the data between these devices are vital to a healthy environment. Attackers can hide their activities on assets if logging is not done correctly; therefore, you need a strategic method of consolidating and auditing all your logs. Without solid audit log analysis, an attack can go unnoticed for a long time. According to the 2021 Verizon Data Breach Investigations Report, the Verizon Threat Research Advisory Center's intelligence collections in both 2019 and 2020 began with cyber espionage targeting cloud environments by the Chinese menuPass threat actor. Among the ongoing threats were attacks on remote access. The full report was based on detailed analysis of more than 79,600 security incidents, including 5,258 data breaches. You can download the full details at www.verizon.com/business/resources/reports/dbir/2021/year-in-review-2021.

Logging involves collecting data from a large number of data sources, which has its own challenges, including collection, storage, encryption, and parsing. Key considerations when configuring logs for collection include normalization, alerting, security, correlation, availability, monitoring, and analysis.

Normalization, or parsing, of logs enables analysis of the data. Parsing the logs into specific fields allows for easier reading, correlation, and analysis. Correlation means that you are able to connect the dots and identify a sequence of events that has the potential to be a breach. Monitoring and alerting on specific events, or on scenarios identified through earlier analysis, is important because it is a more proactive approach. Availability matters because analysts must be able to go back into stored logs. The logging solution chosen must ensure that the logs are not only secure but also compressed and otherwise managed to address the high volume.
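
A minimal sketch of normalization follows: a raw log line (the format here is a simplified, hypothetical one) is parsed into named fields, and its timestamp is converted to UTC so that records from devices in different time zones can be correlated.

import re
from datetime import datetime, timezone

LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+)\s+(?P<host>\S+)\s+(?P<process>\S+)\s+"
    r"user=(?P<user>\S+)\s+action=(?P<action>\S+)\s+result=(?P<result>\S+)"
)

def normalize(raw_line: str) -> dict:
    """Parse a raw line into fields and normalize its timestamp to UTC."""
    match = LOG_PATTERN.match(raw_line)
    if match is None:
        return {"unparsed": raw_line}          # keep unparsable lines for later review
    event = match.groupdict()
    event["timestamp"] = (
        datetime.fromisoformat(event["timestamp"])
        .astimezone(timezone.utc)
        .isoformat()
    )
    return event

line = "2023-05-04T09:12:33-05:00 web01 sshd user=alice action=login result=failure"
print(normalize(line))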

Without logging, a threat actor can be in an environment and fly completely under the radar. There are many solutions for audit logging and centralization; Elasticsearch, Logstash, and Kibana (ELK) is the most common open-source stack used. There are also security information and event management (SIEM) tools that are customizable for security analytics. SIEM tools centralize information gathering and analysis, provide detailed dashboards and reporting that surface critical information through visualization, and offer both manual and automated analysis capabilities. They work with logging infrastructures built on tools such as syslog, syslog-ng, or others that gather and centralize logs, capturing evidence used for incident analysis and creating an audit trail. At the same time, additional information captured, such as network flows and traffic information, file and system metadata, and other artifacts, is used by responders who need to analyze what occurred on a system or network.

Gartner's Magic Quadrant report for 2021 includes Splunk, Rapid7 InsightIDR, LogRhythm, and Exabeam. These SIEM tools work by taking a baseline of two to four weeks of log ingestion to learn the normal state of an organization and then can start monitoring and alerting for anomalies.

Monitoring Configurations

The cloud is not a single object. There are many moving parts that affect performance and availability, with each part needing to work well with the others. When looking at monitoring a cloud environment, you have to watch the network, the individual VMs, the databases, the websites, and storage. With the network, cloud administrators have to watch connectivity to make sure links are not overwhelmed with traffic. The VMs have to be monitored for access and status to make sure they are operating as intended. Database monitoring is incredibly important because of what is in the database: usually sensitive organizational data. Databases need to be monitored for queries, access requests, integrity, and backups. Proactively monitoring websites allows for optimal uptime, and because storage is costly, making sure performance and analytics are kept within proper ranges will keep expenses down.
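
As one small example of VM monitoring, the sketch below pulls average CPU utilization for a hypothetical EC2 instance from CloudWatch using boto3; the instance ID, period, and alert threshold are illustrative assumptions, not recommendations.

from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical ID
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    flag = "ALERT" if point["Average"] > 80 else "ok"      # illustrative threshold
    print(f"{point['Timestamp']} avg CPU {point['Average']:.1f}% {flag}")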

Some cloud monitoring best practices may include the following:

  • Decide the most important metrics and events
  • Choose the right cloud monitoring software
  • Monitor all your cloud infrastructure from one platform
  • Automate monitoring tasks
  • Track end-user experience
  • Test for cloud failure
  • Monitor services and costs

It is difficult to keep up with shifting policies and compliance requirements, but with open-source automation software like Puppet or other tooling such as SolarWinds, an organization can monitor the cloud ecosystem for changes and revert them as needed. It can be time-consuming and tedious to manually perform audit inspections to prepare for an auditor. Puppet is mostly used on Linux and Windows and is capable of managing infrastructure through continuous monitoring using policy as code. For more information, visit puppet.com/use-cases/continuous-compliance.

Key Ownership and Location

Chapter 6, “Cryptography and PKI,” covered public key infrastructure, which allows two parties to communicate securely even if they were previously unknown to one another. That chapter covered the certificate authority (CA), registration authority (RA), certificate revocation lists (CRLs), digital certificates, and how they are distributed. The question this chapter addresses is how that process can be different in a cloud-based ecosystem.

Most cloud service providers offer some type of encryption for customer data. As mentioned earlier in this chapter, protecting data in transit using HTTPS in the cloud between servers or user devices is reliable and straightforward. Encryption protection becomes more complicated for data at rest on a cloud server. Cloud providers can encrypt the data and maintain control over the keys. This can be a security risk because now a company is dealing with malicious insiders and nefarious outsiders who target the cloud provider. Many organizations are not willing to use cloud-based storage for their most sensitive data.

The other two options for key ownership and location are bring your own key (BYOK) and hold your own key (HYOK). With BYOK, the customer can generate and manage encryption keys, but the cloud provider has access to them. With HYOK, customers generate, manage, and store the keys themselves, and the cloud provider is not able to see the contents of the encrypted files.

Key Life-Cycle Management

With the use of keys in securing cloud environments comes the need for a key management system. National Institute of Standards and Technology (NIST) special publication 800-57 Part 2, Revision 1, gives recommendations for key management and best practices and introduces a set of key management concepts such as key life cycle, practice statements, policies, and planning documents.

Key life-cycle management refers to everything from the creation of to the retirement of cryptographic keys. There are several key life-cycle management models that can be used, such as NIST's or Microsoft's. The six states a key goes through in the Microsoft Key Life-Cycle Model are as follows (a minimal sketch modeling these states appears after the list):

  • Creation The key object is created on a domain controller.
  • Initialization The key object has attributes set.
  • Full Distribution The initialized key is available to all domain controllers.
  • Active The initialized key is available for cryptographic operations.
  • Inactive The initialized key is unavailable for some cryptographic operations.
  • Terminated The initialized key is permanently deleted from all domain controllers.
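
The following is not any vendor's key manager; it is a minimal Python sketch that models these six states and enforces the simple policy that a key must be fully distributed before it becomes active. The allowed transitions are an assumption made for illustration.

from enum import Enum, auto

class KeyState(Enum):
    CREATION = auto()
    INITIALIZATION = auto()
    FULL_DISTRIBUTION = auto()
    ACTIVE = auto()
    INACTIVE = auto()
    TERMINATED = auto()

# Allowed transitions in this simplified model of the life cycle.
TRANSITIONS = {
    KeyState.CREATION: {KeyState.INITIALIZATION},
    KeyState.INITIALIZATION: {KeyState.FULL_DISTRIBUTION},
    KeyState.FULL_DISTRIBUTION: {KeyState.ACTIVE},
    KeyState.ACTIVE: {KeyState.INACTIVE, KeyState.TERMINATED},
    KeyState.INACTIVE: {KeyState.ACTIVE, KeyState.TERMINATED},
    KeyState.TERMINATED: set(),
}

class ManagedKey:
    def __init__(self, key_id: str):
        self.key_id = key_id
        self.state = KeyState.CREATION

    def transition(self, new_state: KeyState) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.key_id}: cannot go from {self.state.name} to {new_state.name}")
        self.state = new_state

key = ManagedKey("signing-key-01")
for state in (KeyState.INITIALIZATION, KeyState.FULL_DISTRIBUTION, KeyState.ACTIVE):
    key.transition(state)
print(key.key_id, key.state.name)     # signing-key-01 ACTIVE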

Defining and enforcing key management policies will influence each state of the life cycle, and each state needs to be governed by a key usage policy that defines the cloud assets and applications and what operations those assets and applications can perform.

Details on this publication can be found here: nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt2r1.pdf.

Backup and Recovery Methods

As our cybersecurity industry moves to a software-defined infrastructure using virtualization, cloud infrastructure, and containers, systems that would once have been backed up directly are no longer backed up in a traditional way. Instead, the code that defines them is backed up, as well as the key data that they are intended to provide or to access. This changes the policies and procedures for server and backup administrators, and it means that habits around how backup storage is accomplished and maintained are changing. Reviewing organizational backup habits to see whether they match new frameworks or whether current procedures are failing disaster recovery tests is an important element in planning.

For on-premises systems, some organizations choose to utilize physical storage, either at a site they own and operate or via a third-party service that specializes in storing secure backups in environmentally controlled facilities, or they choose cloud-based offsite storage for their backup media. Offsite storage is a form of geographic diversity and helps to ensure that a single disaster cannot destroy an organization's data entirely; it is done both physically and in the cloud for business continuity and disaster recovery (BCDR). For geographic diversity, distance considerations are important to ensure that a single regional disaster is unlikely to harm the offsite storage. BCDR plans define the processes and procedures that an organization will take when a disaster occurs, which is equally important when those assets are in the cloud.

In a BCDR plan, processes describe all of the documented, procedural “how-tos” of the organization's way of conducting operations. Some processes are so routine and so ingrained in the minds of those performing them that they hardly seem necessary to document. On the other end of the spectrum, some processes must be documented because they are performed only under extraordinary circumstances, perhaps even in times of crisis, such as during disaster recovery.

Preparation for an incident or disaster includes building a team, putting policies and procedures in place, conducting exercises, and building the technical and information-gathering infrastructure that will support incident response needs. These plans cannot exist in a vacuum. Instead, they are accompanied by communications and stakeholder management plans, as well as other detailed response processes unique to each organization.

Infrastructure vs. Serverless Computing

As mentioned earlier in this chapter, infrastructure as a service (IaaS) is a type of cloud computing model that offers essential computation, storage, and networking on demand. Migrating an organization to an IaaS model helps an enterprise reduce the number of physical datacenters needed and gives an organization a great deal of flexibility to spin resources up and down as needed. Serverless computing is a type of infrastructure as a service with a slightly different strategy.

In serverless computing, you don't worry about the infrastructure and configuration; everything is managed by the cloud provider. This means that cloud customers pay for the number of times their code runs on a serverless service. This enables developers to build applications faster while the cloud service automatically handles all of the tasks required to run the code. Some serverless offerings include workflows, Kubernetes, and application environments where the back and front ends are fully hosted.
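
To show how little infrastructure the developer touches, here is a minimal sketch of a serverless function written in the style of an AWS Lambda handler; the event fields are assumptions for the example, and the provider decides where and how often the code actually runs.

import json

def lambda_handler(event, context):
    """Runs only when invoked; the customer pays per invocation, not per server."""
    name = event.get("name", "world")          # hypothetical input field
    body = {"message": f"Hello, {name}!"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

# Local test of the handler logic (no cloud provider involved).
if __name__ == "__main__":":
    print(lambda_handler({"name": "CASP+"}, None))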

Software that is written for application virtual machines allows the developer to create one version of the application so that it can be run on any virtual machine and won't have to be rewritten for every different computer hardware platform. Java Virtual Machine (JVM) is an example of such application virtualization.

Software-Defined Networking

Software-defined networking (SDN) is a technology that allows network professionals to virtualize the network so that control is decoupled from hardware and given to a software application called a controller.

In a typical network environment, hardware devices such as switches make forwarding decisions so that when a frame enters the switch, the switch's logic, built into the content addressable memory (CAM) table, determines the port to which the data frame is forwarded. All packets with the same address will be forwarded to the same destination. SDN is a step in the evolution toward programmable and active networking in that it gives network managers the flexibility to configure, manage, and optimize network resources dynamically by centralizing the network state in the control layer.

Software-defined networking allows networking professionals to respond to the dynamic needs of modern networks. With SDN, a network administrator can shape traffic from a centralized control console without having to touch individual switches. Based on demand and network needs, the network switch's rules can be changed dynamically as needed, permitting the blocking, allowing, or prioritizing of specific types of data frames with a very granular level of control. This enables the network to be treated as a logical or virtual entity.

SDN is defined as three layers: the application layer, the control layer, and the infrastructure (or data plane) layer. At the core of SDN is the OpenFlow standard, which is defined by the Open Networking Foundation (ONF). OpenFlow provides an interface between the controller and the physical network infrastructure layers of the SDN architecture. This design helps SDN achieve the following capabilities, all of which address limitations of standard networking (a conceptual sketch follows the list):

  • Ability to manage the forwarding of frames/packets and apply policy
  • Ability to perform this at scale in a dynamic fashion
  • Ability to be programmed
  • Visibility and manageability through centralized control
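
The following is a purely conceptual Python sketch, not a real OpenFlow controller: a central "controller" pushes match/action rules into a switch's flow table, and the switch forwards frames by looking up those rules rather than relying on its own CAM-table logic. All names and rule formats are illustrative assumptions.

# Conceptual only: flow rules map a match on (in_port, dst_mac) to an action.
flow_table: dict[tuple[int, str], str] = {}

def controller_install_rule(in_port: int, dst_mac: str, action: str) -> None:
    """Centralized control plane: policy is programmed into the data plane."""
    flow_table[(in_port, dst_mac)] = action

def switch_forward(in_port: int, dst_mac: str) -> str:
    """Data plane: forward according to installed rules, else ask the controller."""
    return flow_table.get((in_port, dst_mac), "send-to-controller")

controller_install_rule(1, "aa:bb:cc:dd:ee:ff", "output:port2")
controller_install_rule(1, "11:22:33:44:55:66", "drop")        # block this traffic by policy

print(switch_forward(1, "aa:bb:cc:dd:ee:ff"))   # output:port2
print(switch_forward(3, "aa:bb:cc:dd:ee:ff"))   # send-to-controller (no rule yet)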

Misconfigurations

To misconfigure a system is simply to configure it incorrectly. Cloud misconfiguration seems avoidable, but according to the 2021 IBM Security Cost of a Data Breach Report, the average cost of a breach is $4.24 million, and two-thirds of cloud breaches can be traced to misconfiguration, specifically of cloud application programming interfaces (APIs).

The cloud has many settings, assets, services, resources, and policies, and that makes it an environment that is difficult to set up correctly. This is even more true for organizations that have had to migrate quickly to the cloud for remote work with an IT department that does not fully understand the details of configuration. Misconfiguration is one of the leading causes of financial damage to enterprise as well as governmental organizations.

Cloud providers like AWS have a service called Cloud Conformity Checks. These are rules run against the customer's configuration or infrastructure. These scans will take a rule, run it against a system, and determine whether it was successful. According to AWS, the top service scanned in 2021 for misconfiguration was Amazon Elastic Compute Cloud, better known as EC2. The rule most broken in 2021 was AWS CloudTrail Configuration Changes. CloudTrail is a service that enables governance, compliance, and auditing and keeps an organization in compliance with APRA, MAS, and NIST4.

With this tool, you can log, continuously monitor, and retain account activity related to actions across the AWS infrastructure, providing event history of AWS account activity, including actions taken through the Management Console, command-line interface (CLI), AWS SDKs, and APIs. This event history feature simplifies security auditing, resource change tracking, and troubleshooting. You can identify who or what took which action, what resources were acted upon, when an event occurred, and other details that can help you analyze and respond to any activity within your account.
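
As a small illustration of using that event history, the sketch below queries CloudTrail with boto3 for recent console logins; the event name and one-day time window are just examples of the kinds of questions an audit trail answers.

from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")
now = datetime.now(timezone.utc)

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=now - timedelta(days=1),
    EndTime=now,
    MaxResults=50,
)

for event in events["Events"]:
    # Who did what, and when: the core questions an audit trail answers.
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])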

Collaboration Tools

According to the International Engineering Consortium, unified communications and collaboration is an industry term “used to describe all forms of call and multimedia/cross-media message-management functions controlled by an individual user for both business and social purposes.” This topic is of concern to the CASP+ because communication systems form the backbone of any company. Communication systems can include any enterprise process that allows people to communicate.

Web Conferencing

Web conferencing is a low-cost method that allows people in different locations to communicate over the Internet. While useful, web conferencing traffic can potentially be sniffed and intercepted by an attacker who injects themselves into the stream between the web conferencing clients. This could be accomplished with tools such as Ettercap or Cain & Abel, after which the attacker captures the video traffic with a tool such as UCSniff or VideoSnarf. These tools allow the attacker to eavesdrop on video traffic. Most of these tools are surprisingly easy to use in that you capture and load the web conferencing libpcap-based file (with the .pcap extension) and then watch and listen to the playback. Exercise 9.6 shows you how to perform a basic web conference capture.

Videoconferencing

Today, many businesses make use of videoconferencing systems. Videoconferencing is a great way for businesses to conduct meetings with customers, employees, and potential clients. If videoconferencing systems are not properly secured, there is the possibility that sensitive information could be leaked, and considering how much of the global workforce is working and schooling from home, this could be a big risk. Most laptops and even some desktop systems come with webcams, and there are a host of programs available that will allow an attacker to turn on a camera to spy on an individual. Some of the programs are legitimate, while others are types of malware and Trojan horses designed specifically to spy on users. One example is gh0st Rat. This Trojan was designed to turn on the webcam, record audio, and enable built-in internal microphones to spy on people. You can read more about this malware here: attack.mitre.org/software/S0032.

To prevent these types of problems, you should instruct users to take care when opening attachments from unknown recipients or installing unknown software and emphasize the importance of having up-to-date antivirus. Also, all conference calls should require strong passcodes to join a meeting, and the passcodes for periodic meetings should be changed for each meeting.

Audio Conferencing

When videoconferencing, a user often has the obvious indication that conferencing is still ongoing; that is, they see the screen of their conferenced co-workers. Such is not the case with audio conferencing. When sharing an open line, for example on a telephone, an employee can easily forget that all ambient noise will be heard by all of the conference attendees.

Instant Messaging

Instant messaging (IM) has been around a long time and has evolved into modern corporate tools like Microsoft Teams, Slack, and Discord. It is widely used and available in many home and corporate settings. What has made IM so popular is that it differs from email in that it allows two-way communication in near real time. It also lets business users collaborate, hold informal chat meetings, and share files and information. Although some IM platforms have added encryption, central logging, and user access controls for corporate clients, others operate without such controls.

From the perspective of the CASP+, IM is a concern due to its potential to be a carrier for malware. IM products are all highly vulnerable to threats such as worms, backdoor Trojan horses, hijacking and impersonation, and denial of service. IM can also be used to send sensitive information. Much of this is because of the file transfer and peer-to-peer file sharing capabilities available to users of these applications. Should you decide to use IM in your organization, there are some basic questions that you need to address:

  • Is the IM solution a critical business requirement?
  • What IM product will be used? Is it just one or will multiple applications be permitted?
  • Will encryption be used?
  • Is IM just for internal use?
  • Will IM be used for external clients?
  • Is the company subject to regulatory compliance requirements for IM? If so, how will data be logged and recorded?
  • Will users be allowed to transfer files and applications?
  • Will virus scanning, file scanning, and content-filtering applications be used?
  • How many employees will use the system over the next 24 to 36 months?
  • Will the IM application be available to everyone or only to specific users?
  • Will the IM solutions use filters on specific words to flag for profanity or inappropriate content?
  • Will there be user training for secure use of IM?

Desktop and Application Sharing

Desktop sharing software is nothing new. Some early examples of desktop sharing programs were actually classified as malware. One such program is Back Orifice (BO), released in 1998. Although many other remote Trojan programs have been created, such as NetBus and Poison Ivy, BO was one of the first to have the ability to function as a remote system administration tool. It enables a user to control a computer running the Windows operating system from a remote location. Although some may have found this functionality useful, there are other functions built into BO that made it much more malicious. BO has the ability to hide itself from users of the system, flip the images on their screens upside down, capture their keystrokes, and even turn on their webcams. BO can also be installed without user interaction and distributed as a Trojan horse.

Desktop sharing programs are extremely useful, but there are potential risks. One issue is that anyone who can connect can use your desktop to execute or run programs on your computer. A search on the Web for Microsoft Remote Desktop Services returns a list of hundreds of systems to which you can potentially connect if you can guess the username and password.

At a minimum, these applications and their related ports should be blocked and restricted to those individuals who have a need for this service. Advertising this service on the Web is also not a good idea. If there is a public link, it should not be indexed by search engines. There should also be a warning banner on the page stating that the service is for authorized users only and that all activity is logged.

Another issue with desktop sharing is the potential risk from the user's point of view. If the user shares the desktop during a videoconference, then others in the conference can see what is on the presenter's desktop. Should there be a folder titled “why I hate my boss,” everyone will see it.

Application sharing is fraught with risks as well. If the desktop sharing user then opens an application such as email or a web browser before the session is truly terminated, anybody still in the meeting can read and/or see what's been opened. Any such incident looks highly unprofessional and can sink a business deal.

Table 9.2 lists some programs and default port numbers to be aware of.

TABLE 9.2 Legitimate and malicious desktop sharing programs

Name                      Protocol   Default port
Back Orifice              UDP        31337
Back Orifice 2000         TCP/UDP    54320/54321
Beast                     TCP        6666
Citrix ICA                TCP/UDP    1494
Loki                      ICMP       N/A
Masters Paradise          TCP        40421/40422/40426
Remote Desktop Control    TCP/UDP    49608/49609
NetBus                    TCP        12345
Netcat                    TCP/UDP    Any
Reachout                  TCP        43188
Remotely Anywhere         TCP        2000/2001
Remote                    TCP/UDP    135-139
Timbuktu                  TCP/UDP    407
VNC                       TCP/UDP    5800/5801

Remote Assistance

Remote assistance programs can be used to provide temporary control of a remote computer over a network or the Internet to resolve issues or for troubleshooting purposes. These tools are useful because they allow problems to be addressed remotely and can cut down on the site visits that a technician performs.

Presence

Presence is an Apple software product that is somewhat similar to Windows Remote Desktop. Presence gives users access to their Mac's files wherever they are. It also allows users to share files and data between a Mac, iPhone, and iPad.

Domain Bridging

Nominally, a device operates in one network, viewing traffic intended for that network domain. However, when the device is connected via remote assistance software or a virtual private network connection to a corporate network, it is conceivable that the device can bridge two network domains. Unauthorized domain bridging is a security concern with which the CASP+ needs to be familiar.

Email

Many individuals would agree that email is one of the greatest inventions to come out of the development of the Internet. It is the most used Internet application. Just take a look around the office and see how many people use Android phones, iPhones/iPads, tablets, and other devices that provide email services. Email provides individuals with the ability to communicate electronically through the Internet or a data communications network.

Although email has many great features and provides a level of communication previously not possible, it's not without its problems. Now, before we beat it up too much, you must keep in mind that email was designed in a different era. Decades ago, security was not as much of a driving issue as usability. By default, email sends information via clear text, so it is susceptible to eavesdropping and interception. Email can be easily spoofed so that the true identity of the sender may be masked. Email is also a major conduit for spam, phishing, and viruses. Spam is unsolicited bulk mail. Studies by Symantec and others have found that spam is much more malicious than in the past. Although a large amount of spam is used to peddle fake drugs, counterfeit software, and fake designer goods, it's more targeted to inserting malware via malicious URLs today.

As for functionality, email operates by means of several underlying services, which can include the following:

  • Simple Mail Transfer Protocol Simple Mail Transfer Protocol (SMTP) is used to send mail and relay mail to other SMTP mail servers and uses port 25 by default.
  • Post Office Protocol Post Office Protocol version 3 (POP3), the current version, is widely used to retrieve messages from a mail server. POP3 performs authentication in clear text and uses port 110 by default.
  • Internet Message Access Protocol Internet Message Access Protocol (IMAP) can be used as a replacement for POP3 and offers advantages over POP3 for mobile users. IMAP has the ability to work with mail remotely and uses port 143.

Basic email operation consists of the SMTP service being used to send messages to the mail server. To retrieve mail, the client application, such as Outlook, may use either POP or IMAP. Exercise 9.7 shows how to capture clear-text email for review and reinforces the importance of protecting email with PGP, SSL/TLS, or other encryption methods.
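
The sketch below shows one way to avoid clear-text email submission: Python's standard smtplib upgrading the SMTP connection with STARTTLS before authenticating and sending. The server name, port, and credentials are placeholders, and this protects only the hop to the submission server, not the message end to end.

import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Quarterly report"
msg.set_content("See the attached figures at the next meeting.")

context = ssl.create_default_context()

# Port 587 is the common submission port; STARTTLS upgrades the session to TLS
# so credentials and message content are not sent in clear text.
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls(context=context)
    server.login("alice@example.com", "app-specific-password")   # placeholder credentials
    server.send_message(msg)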

The CASP+ should work to secure email and make users aware of the risks. Users should be prohibited by policy and trained not to send sensitive information by clear-text email. If an organization has policies that allow email to be used for sensitive information, encryption should be mandatory.

Several solutions exist to meet this need. One is Pretty Good Privacy (PGP). Other options include link encryption or secure email standards such as Secure/Multipurpose Internet Mail Extensions (S/MIME) or Privacy Enhanced Mail (PEM).

Telephony

Businesses with legacy PBX and traditional telephony systems are especially vulnerable to attack and misuse. One of the primary telephony threats has to do with systems with default passwords. If PBX systems are not secured, an attacker can attempt to call into the system and connect using the default password. Default passwords may be numbers such as 1, 2, 3, 4, or 0, 0, 0, 0. An attacker who can access the system via the default password can change the prompt on the voice mailbox account to “Yes, I will accept the charges.” The phone hacker then places a collect call to the number that has been hacked. When the operator asks about accepting charges, the “Yes” is heard and the call completes. These types of attacks are typically not detected until the phone bill arrives or the phone company calls to report unusual activity. Targets of this attack tend to be toll-free customer service lines or other companies that may not notice this activity during holidays or weekends.

A CASP+ should understand that the best defense against this type of attack is to change the phone system's default passwords. Employees should also be prompted to change their voicemail passwords periodically. When employees leave (are laid off, resign, retire, or are fired), their phones should be forwarded to another user, and their voicemail accounts should be immediately deleted.

VoIP

Once upon a time, a network engineer was asked to run data over existing voice lines. Years later, another company asked him what he thought about running voice over existing data lines. This is the basis of VoIP. VoIP adds functionality and reduces costs for businesses, as it allows the sharing of existing data lines. This approach is typically referred to as convergence—or as triple play when video is included.

Before VoIP, voice was usually sent over the circuit-switched public switched telephone network (PSTN). These calls were then bundled by the phone carrier and sent over a dedicated communications path. As long as the conversation continued, no one else could use the same fixed path.

VoIP changes this because VoIP networks are basically packet-switched networks that utilize shared communications paths easily accessible by multiple users. Since this network is accessible by multiple users, an attacker can attempt to launch an on-path attack. An on-path attack allows an attacker to sit between the caller and the receiver and sniff the voice data, modify it, and record it for later review. Sniffing is the act of capturing VoIP traffic and replaying it to eavesdrop on a conversation. Sophisticated tools and expensive, specialized equipment are not required to intercept unsecured VoIP traffic; easily available tools such as Cain & Abel (www.oxid.it) make this possible.

Exercise 9.8 demonstrates how Cain & Abel can be used to sniff VoIP traffic. It's also worth mentioning that if network equipment is accessible, an attacker can use Switched Port Analyzer (SPAN) to replicate a port on a switch and gain access to trunked VoIP traffic. It's important that the CASP+ understand the importance of placing physical controls so that attackers cannot get access to network equipment.

Although VoIP uses TCP in some cases for caller setup and signaling, denial of service (DoS) is a risk. VoIP relies on some UDP ports for communication. UDP can be more susceptible to DoS than TCP-based services. An attacker might attempt to flood communication pathways with unnecessary data, thus preventing any data from moving on the network. Using a traditional PSTN voice communication model would mean that even if the data network is disabled, the company could still communicate via voice. With convergence, a DoS attack has the potential to disrupt both the IP phones and the computer network.

Yet another, more recent, VoIP vulnerability was demonstrated at DEF CON, where presenters showed that VoIP could be used as a command-and-control (C&C) mechanism for botnets. Basically, infected systems can host or dial into a conference call in order to perform a wide range of tasks, such as specifying what systems will participate in a distributed DoS (DDoS) attack, downloading new malware, or using the botnet for the exfiltration of data. This poses data loss prevention questions, to say the least. Here are some basic best practices that can be used for VoIP security:

  • Enforce strong authentication.
  • Implement restrictive access controls.
  • Disable any and all unnecessary services and ports.
  • Encrypt all VoIP traffic so that attackers can't easily listen in on conversations.
  • Deploy firewalls and IDSs.
  • Keep systems and devices patched and updated.

VoIP Implementation

VoIP is a replacement for the PSTN of the past. The PSTN is composed of carriers such as AT&T, Verizon, and smaller, localized companies that still manage the telephone lines and other public circuit-switched telephone networks. These traditional phone networks consisted of telephone lines, fiber-optic cables, microwave transmission links, and so forth that were interconnected and allowed any telephone in the world to communicate with any other.

The equipment involved was highly specialized and may have been proprietary to the telecommunications carrier, which made it much harder to attack. After all, traditional telephones were only designed to make and receive calls.

VoIP Softphones

VoIP softphones can be a single application on a computer, laptop, tablet computer, or smartphone. A VoIP softphone resides on a system that has many different uses. A softphone opens another potential hole in the computer network or host that an attacker can exploit as an entry point. Hardware devices have advantages over software (softphones).

Hardware-Based VoIP

Hardware-based VoIP phones look like typical phones but are connected to the data network instead of PSTN. These devices should be viewed as embedded computers that can be used for other purposes.

A well-designed VoIP implementation requires the CASP+ to consider the design of the network and to segregate services. Using technologies like a virtual local area network (VLAN), the CASP+ can segregate data traffic from voice traffic; however, convergence is making this task much harder. Implementing VLANs correctly can drastically reduce, and often eliminate, the potential for sniffing attacks that utilize automated tools such as those referenced earlier, as well as many other tools that focus exclusively on this type of attack, regardless of whether hardware- or software-based phones are in use. One such tool is Voice Over Misconfigured Internet Telephones (VOMIT), which deciphers any voice traffic on the same VLAN or any VLANs that it can access.

Another implementation concern is quality of service (QoS). Although no one may notice if email arrives a few seconds later, voice does not have that luxury. Fortunately, segmentation via VLANs can assist with remedying this kind of issue as well. Here are some QoS examples:

  • Jitter is the variation in transmission latency that can cause packet loss and degraded VoIP call quality.
  • Latency is a delay in the transmission of a data packet.

Before VoIP systems are implemented, a CASP+ must explore techniques to mitigate risk by limiting exposure so that problems on the data network cannot spread to the voice network. VoIP equipment, gateways, and servers tend to use open standards based on RFCs and open protocols, which also allows an attacker to gain a better understanding of the equipment and technology. If that is not enough, most of the vendors place large amounts of product information on their websites. This aids attackers in ramping up their knowledge very quickly.

Bit Splitting

Mentioned earlier in the chapter, bit splitting is another technique for securing data over a computer network that involves encrypting data, splitting the encrypted data into smaller data units, distributing those smaller units to different storage locations, and then further encrypting the data at its new location. Data is protected from security breaches, because even if an attacker is able to retrieve and decrypt one data unit, the information would be useless unless it can be combined with decrypted data units from the other locations.

Data Dispersion

Data dispersion consists of information being distributed and stored in multiple cloud pods, which is a key component of cloud storage architecture. The ability to have data replicated throughout a distributed storage infrastructure is critical. This allows a cloud service provider to offer storage services based on the level of the user's subscription or the popularity of the item.

Summary

In this chapter, we examined the advantages and disadvantages of virtualization and cloud computing as well as the issues that they bring to enterprise security.

Cloud and virtualized computing has become the way of the future. So many advances in computing have brought about more changes than just virtualization, including network storage and cloud computing. Cloud computing changed the concept of traditional network boundaries by placing assets outside the organization's perimeter and control.

The idea of cloud computing represents a shift in thought as well as trust. The cloud service can be fully managed by the cloud provider. Consumers use the service at a rate that is set by their particular needs. Cost and ease of use are two great benefits of cloud computing, but you must consider significant security concerns when contemplating moving critical applications and sensitive data to public and shared cloud environments. To address these concerns, the cloud provider must develop sufficient controls to provide the same or greater level of security than the organization would have if the cloud was not used, and organizations will continue to evolve as new technologies become available.

Exam Essentials

Be able to explain virtualization strategies. After this chapter, you should be able to explain the difference between hypervisor types, containers, and emulation, choosing the best model for different situations.

Understand the different cloud models. The CASP+ professional should be able to explain the difference between the different cloud service models, hosting models, and the considerations of deployment including resources, protection, location, and cost.

Be able to explain cloud technology adoption and security. Understand how adoption of cloud technologies affects the entire organization from automation to encryption to logging and monitoring.

Be able to choose the correct backup and recovery methods. After this chapter, you should understand the repercussions of the cloud for business continuity and disaster recovery as well as the role of primary and alternative backup providers.

Review Questions

You can find the answers in the Appendix.

  1. You are setting up a new virtual machine. What type of virtualization should you use to coordinate instructions directly to the CPU?
    1. Type B.
    2. Type 1.
    3. Type 2.
    4. No VM directly sends instructions to the CPU.
  2. Your DevOps team decided to use containers because they allow running applications on any hardware. What is the first thing your team should do to have a secure container environment?
    1. Install IPS.
    2. Lock down Kubernetes and monitor registries.
    3. Configure antimalware and traffic filtering.
    4. Disable services that are not required and install monitoring tools.
  3. You work in information security for a stock trading organization. You have been tasked with reducing cost and managing employee workstations. One of the biggest concerns is how to prevent employees from copying data to any external storage. Which of the following best manages this situation?
    1. Move all operations to the cloud and disable VPN.
    2. Implement server virtualization and move critical applications to the server.
    3. Use VDI and disable hardware and storage mapping from a thin client.
    4. Encrypt all sensitive data at rest and in transit.
  4. You are exploring the best option for your team to read data that was written onto storage material by a device you do not have access to, and the backup device has been broken. Which of the following is the best option for this?
    1. Type 1 hypervisor
    2. Type 2 hypervisor
    3. Emulation
    4. PaaS
  5. You are a security architect building out a new hardware-based VM. Which of the following would least likely threaten your new virtualized environment?
    1. Patching and maintenance
    2. VM sprawl
    3. Oversight and responsibility
    4. Faster provisioning and disaster recovery
  6. Management of your hosted application environment requires end-to-end visibility and a high-end performance connection while monitoring for security issues. What should you consider for the most control and visibility?
    1. You should consider a provider with connections from your location directly into the applications cloud resources.
    2. You should have a private T1 line installed for this access.
    3. You should secure a VPN concentrator for this task.
    4. You should use HTTPS.
  7. As the IT director of a nonprofit agency, you have been challenged at a local conference to provide technical cloud infrastructure that will be shared between several organizations like yours. Which is the best cloud partnership to form?
    1. Private cloud
    2. Public cloud
    3. Hybrid cloud
    4. Community cloud
  8. Your objectives and key results (OKRs) being measured for this quarter include realizing the benefits of a single-tenancy cloud architecture. Which one of these results is a benefit of a single-tenancy cloud service?
    1. Security and cost
    2. Reliability and scaling
    3. Ease of restoration
    4. Maintenance
  9. With 80 percent of your enterprise in a VPC model, which of the following is not a key enabling technology?
    1. Fast WAN and automatic IP addressing
    2. High-performance hardware
    3. Inexpensive servers
    4. Complete control over process
  10. You have a new security policy that requires backing up a database offsite specifically for redundancy. This data must be backed up every 24 hours. Cost is important. What method are you most likely to deploy?
    1. File storage
    2. Electronic vaulting
    3. Block storage
    4. Object storage
  11. A software startup hired Pamela to provide expertise on data security. Clients are concerned about confidentiality. If confidentiality is stressed more than availability and integrity, which of the following scenarios is BEST suited for the client?
    1. Virtual servers in highly available environment. Clients will use redundant virtual storage and remote desktop services to access software.
    2. Virtual servers in highly available environment. Clients will use single virtual storage and remote desktop services to access software.
    3. Clients are assigned virtual hosts running on shared hardware. Physical storage is partitioned with block cipher encryption.
    4. Clients are assigned virtual hosts running on shared hardware. Virtual storage is partitioned with streaming cipher encryption.
  12. Your company decided to outsource certain computing jobs that need a large amount of processing power in a short duration of time. You suggest the solution of using a cloud provider that enables the company to avoid a large purchase of computing equipment. Which of the following is your biggest concern with on-demand provisioning?
    1. Excessive charges if deprovisioning fails
    2. Exposure of intellectual property
    3. Data remanence from previous customers in the cloud
    4. Data remanence of your proprietary data that could be exposed
  13. Pedro has responsibility for the cloud infrastructure in the large construction company he works for. He recorded data that includes security logs, object access, FIM, and other activities that your SIEM often uses to detect unwanted activity. Which of the following BEST describes this collection of data?
    1. Due diligence
    2. Syslog
    3. IDR
    4. Audit trail
  14. Marcus is a remote employee needing to access data in cloud storage. You need to configure his Windows 10 client for a remote access VPN using IPSec/L2TP. Why is a VPN so important for remote employees?
    1. VPN traffic is accessible.
    2. VPN traffic is encrypted.
    3. A VPN allows you remote access.
    4. A VPN is an option if you are on your home network.
  15. Dana has a three-layer line of defense working to protect remote access to his network and cloud applications, including a firewall, antivirus software, and VPN. What action should your network security team take after standing up this defense?
    1. Log all security transactions.
    2. Monitor alerts from these assets.
    3. Check the firewall configuration monthly and antivirus weekly.
    4. Run tests for VPN connectivity once every 24 hours.
  16. Your CIO approached the CISO with the idea to configure IPSec VPNs for data authentication, integrity, and confidentiality for cloud assets. Which of the following reasons would help support the CIO's goals?
    1. IPSec only supports site-to-site VPN configurations.
    2. IPSec can only be deployed with IPv6.
    3. IPSec authenticates clients against a Windows server.
    4. IPSec uses secure key exchange and key management.
  17. Frederick's company is migrating key systems from on-premise systems to a virtual data center in the cloud managed by a third party. Remote access must be available at all times. Access controls must be auditable. Which of these controls BEST suits these needs?
    1. Access is captured in event logs.
    2. Access is limited to single sign-on.
    3. Access is configured using SSH.
    4. Access is restricted using port security.
  18. Your business cannot overlook the need for allowing employees to have remote access and collaboration tools. You never know when an employee will need to connect to the corporate intranet from a remote location. The first thing to do is create a comprehensive network security policy. Which one of these will not fit into that policy?
    1. Definition of the classes of users and their levels of access
    2. Identification of what devices are allowed to connect through a VPN
    3. The maximum idle time before automatic termination
    4. Allow list ports and protocols necessary to everyday tasks
  19. You work for the power company that supplies electricity to three states. You rely heavily on the data you collect and that is replicated in the cloud. Data is split into numerous arrays, and a mapper processes them to certain cloud storage options. What is this process called?
    1. Encryption
    2. Data dispersion
    3. Bit splitting
    4. Perimeter security
  20. Jonathan is a cloud security sales consultant working for a cloud access security broker (CASB) company. His organization is advocating applying the highest level of protection across all your cloud assets. You suggest this is not what the priority should be. What would be a more strategic priority?
    1. Determining what to protect through data discovery and classification
    2. Running anti-malware software on all cloud instances
    3. Using vulnerability scanning software on mission-critical servers
    4. Implementing threat mitigation strategies