This chapter discusses securing virtualized, distributed, and shared computing. Virtualized computing has come a long way in the last 20 years, and it can be found everywhere today, from major businesses to small office, home office (SOHO) computing environments. Advances in computing have brought about more changes than just virtualization, including network storage and cloud computing. Cloud computing changed the concept of traditional network boundaries by placing assets outside the organization's perimeter.
In this chapter, we'll look at both the advantages and the disadvantages of virtualization and cloud computing as well as the concerns that they raise for enterprise security.
A question that increasingly concerns security professionals is who holds the data. With the rise of cloud computing, network boundaries are much harder to define. A network boundary is the point at which your control ends, and cloud computing does away with the typical network boundary. The elasticity and scalability of cloud services make this shift significant because historically the demarcation line sat at the edge of the physical network, the point at which the firewall is typically found.
The concept of cloud computing represents a shift in thought in that end users do not know the details of a specific technology. The service can be fully managed by the provider, and cloud consumers can use the service at a rate that is set by their particular needs. Cost and ease of use are two great benefits of cloud computing, but you must consider significant security concerns when contemplating moving critical applications and sensitive data to public and shared cloud environments. To address these concerns, the cloud provider must develop sufficient controls to provide the same or a greater level of security than the organization would have if the cloud was not used.
Cloud computing is not the only way in which network boundaries are changing. Telecommuting and outsourcing have also altered network boundaries. Telecommuting allows employees to work from home and avoid the drive to the office. The work-from-home (WFH) model adopted during the COVID-19 pandemic affected healthcare, IT, education, nonprofit, sales, and marketing, and those are just some of the industries that allow telecommuting.
Cloud computing can include virtual servers, services, applications, or anything you consume over the Internet. Cloud computing gets its name from the drawings typically used to describe the Internet. It is a modern concept that seeks to redefine consumption and delivery models for IT services. In a cloud computing environment, the end user may not know the location or details of a specific technology; it can be fully managed by the cloud service. Cloud computing offers users the ability to increase capacity or add services as needed without investing in new datacenters, training new personnel, or maybe even licensing new software. This on-demand, or elastic, service can be added, upgraded, and provided at any time.
Virtualization is a technology that system administrators have been using in datacenters for many years, and it is at the heart of cloud computing infrastructure. It is a technology that allows the physical resources of a computer (CPU, RAM, hard disk, graphics card, etc.) to be shared by virtual machines (VMs). Consider the old days when a single physical hardware platform—the server—was dedicated to a single server application, such as a web server. It turns out that a typical web server application didn't utilize much of the underlying hardware available. If you assume that a web application running on a physical server utilizes 30 percent of the hardware resources, that means that 70 percent of the physical resources are going unused, and much of the server's capacity is being wasted.
With virtualization, if three web servers are running via VMs with each utilizing 30 percent of the physical hardware resources of the server, 90 percent of the physical hardware resources of the server are being utilized. This is a much better return on hardware investment. By installing virtualization software on your computer, you can create VMs that can be used to work in many situations with many different applications.
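The consolidation arithmetic above can be sketched in a few lines (Python used here purely as an illustration; the 30 percent figure is the chapter's example, and real utilization varies by workload):

```python
# Toy calculation of hardware utilization before and after consolidation.
# Assumes each web server workload consumes 30% of one physical host,
# per the chapter's example figure.

def utilization(workloads_pct: list[float]) -> float:
    """Total percentage of one host's resources consumed by the workloads."""
    return sum(workloads_pct)

# One app per physical server: 30% used, 70% idle on that host.
single = utilization([30.0])

# Three web server VMs consolidated onto one host.
consolidated = utilization([30.0, 30.0, 30.0])

print(f"Single workload per host: {single:.0f}% used, {100 - single:.0f}% idle")
print(f"Three VMs on one host:    {consolidated:.0f}% used")
```

The same arithmetic is what drives the "better return on hardware investment" claim: the idle capacity shrinks from 70 percent to 10 percent.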
A hypervisor is the software installed on a computer to support virtualization. It can be implemented as firmware (software permanently programmed into hardware) or as software installed on top of an operating system. It is within the hypervisor that the VMs are created, and the hypervisor allocates the underlying hardware resources to the VMs. Examples of hypervisor software are VMware Workstation and Oracle VM VirtualBox; free versions of each are available to download and use. A VM is a virtualized computer that executes programs as a physical machine would.
A virtual server enables the user to run two or more operating systems on one physical computer. For example, a virtual machine will let you run Windows, Linux, or virtually any other operating system. VMs can be used for development, system administration, or production to reduce the number of physical devices needed. Exercise 9.1 shows how to convert a physical computer into a virtual image.
Virtualization sprawl is a common issue in enterprise organizations. Virtualization or VM sprawl happens when the number of virtual machines on a network grows beyond the point where system administrators can manage them correctly or efficiently. To keep this from happening, strict policies should be developed and adhered to, and automation should be used to stay on top of the resources being consumed. Creating a VM library is helpful as long as you have a VM librarian to go with it.
Virtual servers can reside on a virtual emulation of the hardware layer. Using this virtualization technique, the guest has no knowledge of the host's operating system. Virtualized servers make use of a hypervisor too.
Hypervisors are classified as either Type 1 (I) or Type 2 (II). Type 1 hypervisor systems do not need an underlying OS, while Type 2 hypervisor systems do. A Type 1 hypervisor runs directly on the bare metal of a system, whereas a Type 2 hypervisor runs on a host operating system that provides virtualization services. Each VM has its own operating system and is allocated physical hardware resources such as CPU, RAM, and hard disk, as well as network resources.
The host operating system is the operating system of the computer the hypervisor is being installed on. The guest operating system is the operating system of the VM that resides within the hypervisor.
The hypervisor validates all of the guest-issued CPU instructions and manages any executed code that requires additional privileges. VMware and Microsoft Hyper-V both take this approach; the hypervisor is also known as a virtual machine monitor (VMM). The hypervisor is the foundation of this type of virtualization; it accomplishes the following:
Just like any environment, each hypervisor has its pros and cons. Some of the pros of running a VM are that you can run more than one OS at a time; you can install, reinstall, snapshot, roll back, or back up any time you want quite easily; and you manage the allocation of resources. The cons are that performance may not be as robust as it would be on bare metal, USB and external hard drives can cause major issues, and some of us would rather roll back an image than take the time to troubleshoot an issue.
Modern computer systems have come a long way in how they process, store, and access information. Virtual memory is the combination of the computer's primary memory (RAM) and secondary storage. When these two technologies are combined, the OS lets application programs function as if they have access to more physical memory than what is actually available to them. Virtualization types can include the following:
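The RAM-plus-secondary-storage idea can be made concrete with a toy model (a sketch only; a real OS uses hardware page tables and sophisticated page-replacement policies). Here a fixed number of RAM frames is backed by a "swap" area, so a program can touch more pages than physical memory holds:

```python
# Toy virtual-memory model: a few RAM frames backed by swap space.
# Shows only why a program can address more memory than physically exists.
from collections import OrderedDict

class ToyVirtualMemory:
    def __init__(self, ram_frames: int):
        self.ram = OrderedDict()   # page -> data, oldest entry first
        self.swap = {}             # pages spilled to "disk"
        self.ram_frames = ram_frames

    def store(self, page: int, data: str) -> None:
        self.swap.pop(page, None)
        self.ram[page] = data
        self.ram.move_to_end(page)
        while len(self.ram) > self.ram_frames:
            victim, victim_data = self.ram.popitem(last=False)
            self.swap[victim] = victim_data   # page out to secondary storage

    def load(self, page: int) -> str:
        if page not in self.ram:              # page fault: bring page back in
            self.store(page, self.swap[page])
        return self.ram[page]

vm = ToyVirtualMemory(ram_frames=2)
for p in range(4):            # touch 4 pages with only 2 frames of RAM
    vm.store(p, f"data-{p}")
print(sorted(vm.ram), sorted(vm.swap))   # 2 pages resident, 2 paged out
```

The class name and eviction policy (simple FIFO) are illustrative assumptions; the point is only the combination of fast primary memory with slower secondary storage.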
Technologies related to virtual systems continue to evolve. In some cases, you may not need an entire virtual system to complete a specific task. In such situations, a container can now be used. Containers allow for the isolation of applications running on a server. Containers offer a lower-cost alternative to using virtualization to run isolated applications on a single host. When a container is used, the OS kernel provides process isolation and performs resource management. Determining when to use containers instead of virtualizing the OS mostly breaks down to the type of workload you have to complete. Containers allow for applications to be deployed faster and support accelerated development. Modern container technology was popularized by Docker in 2013. Since then, Google introduced the container organization platform Kubernetes. Other vendors include VMware Tanzu, Microsoft Azure Kubernetes Service, and Amazon Elastic Container Service.
Virtual machines and containers have many layers of implementation. Another method of reproducing the properties of one system on another is emulation. Emulators allow you to turn your PC into a Mac and play games designed for hardware that was built decades ago. Most emulators tend to run slower than the machine they are simulating. Dolphin Emulator is a free and open-source video game console emulator that allows Nintendo GameCube or Wii games to be played on a PC or Android device. Parallels is a program that allows you to run Windows on a Mac computer. Application virtualization permits a user to access applications that are not installed on their devices, encapsulating the program from the OS on which it is executed. The application experience is the same as if it were present on the end user's computer. The software allows applications to run on a variety of operating systems and web browsers.
Virtualized servers have many advantages. One of the biggest is server consolidation. Virtualization lets you host many virtual machines on one physical server. This reduces deployment time and makes better use of existing resources. Virtualization also helps with research and development. Virtualization allows rapid deployment of new systems and offers the ability to test applications in a controlled environment. Virtual machine snapshots allow for easy image backup before changes are made and thus provide a means to revert to the previous good image quickly. From a security standpoint, you physically have to protect only one physical server where you may have had to protect many servers in the past. This is useful for all types of development testing and production scenarios.
Physical servers may malfunction or experience a hardware failure during important times or when most needed. In these situations, virtualization can be a huge advantage. Virtual systems can be imaged or replicated and moved to another physical computer very quickly. This aids the business continuity process and reduces outage time. Virtualization minimizes physical space requirements and permits the replacement of physical servers with fewer machines.
With every advantage there is usually a drawback, and virtualization is no different. Virtualization adds another layer of complexity. Many books are available that explain how to manage a Microsoft server, but virtualization may result in your having a Microsoft server as a host machine with several Linux and Unix virtual servers, or multiple Microsoft systems on a single Linux machine. This new layer of complexity can cause problems that may be difficult to troubleshoot. Vulnerabilities associated with a single physical server hosting multiple companies' virtual machines include the commingling of data. If this happens and a data breach occurs, your data may be affected. There can also be security issues when a single platform is hosting multiple companies' virtual machines. These can include the following:
Virtualization also requires additional skills. Virtualization software and the tools used to work within a virtual environment add an extra burden on administrators because they will need to learn something new. Security disadvantages of virtualizing servers can also be seen in Type 1, Type 2, and container-based systems.
With Type 1 VMs, you manage guests directly from the hypervisor. Any vulnerabilities of the VMs must be patched. With Type 2 VMs, you also have the issue of the underlying OS and any vulnerabilities that it may have. A missed patch or an unsecured base OS could expose the OS, hypervisor, and all VMs to attack. Another real issue with Type 2 VMs is that such systems typically allow shared folders and the migration of information between the host and guest OSs. Sharing data increases the risk of malicious code migrating from one VM to the base system.
Some basic items to review for securing virtual systems include those in Table 9.1.
TABLE 9.1 Common security controls for virtual systems
Item | Comments
---|---
Antivirus | Antivirus must be present on the host and all VMs.
Hardening | All VMs should be hardened so that nonessential services are removed.
Physical controls | Controls that limit who has access to the datacenter.
Authentication | Strong access control.
Resource access | Only administrative accounts as needed.
Encryption | Use encryption for sensitive data in storage or in transit.
Remote Desktop Services | Restrict when not needed. When it is required, use only 256-bit or higher encryption.
Remember dumb terminals and the thin client concept? This has evolved into what is known as the virtual desktop infrastructure (VDI). This centralized desktop solution uses servers to serve up a desktop operating system to endpoint devices. Each hosted desktop virtual machine runs an operating system such as Windows 11 or Windows Server 2022. The remote desktop is delivered to the user's endpoint device via Remote Desktop Protocol (RDP), Citrix, or another remote display architecture. Technologies such as RDP are great for remote connectivity, but they can also allow remote access by an attacker.
This system has lots of benefits, such as reduced onsite support and greater centralized management. However, a disadvantage of this solution is that there is a significant investment in hardware and software to build the backend infrastructure.
Tools that can be used for structural planning and construction of enterprise cloud instances for speed and ease of use include middleware and metadata.
Middleware sits between the operating system and an application, providing communication functionality and making a connection between any two clients, servers, or databases. Advantages of using middleware include faster deployment of applications in the cloud as well as in containerized environments.
Metadata in the cloud helps organize assets, data, and virtual instances so that it is easier to find, understand, and manage information. Many different metadata tags can be used from a template or created uniquely. Most tags have a field and a type for classification, and the type can be a string, a Boolean, or a date/time. Tags are usually optional unless they are explicitly required. The most important question to ask about metadata and tags is what information your organization wants to keep track of and how that metadata will be used. Metadata can be used for compliance and governance as well as grouping for cost analysis. Fields such as data_owner could be important to one department, while data_confidentiality or storage_location could be important to another department.
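As a sketch of the tagging model described above, consider validating an asset's tags against a simple schema. The field names `data_owner`, `data_confidentiality`, and `storage_location` come from the chapter; the remaining fields, the type rules, and which tags are required are illustrative assumptions:

```python
# Illustrative cloud metadata tag validator: each tag has a field and a typed
# value (string, Boolean, or date/time), and some tags are explicitly required.
from datetime import datetime

TAG_TYPES = {                       # field -> expected type (assumed schema)
    "data_owner": str,
    "data_confidentiality": str,
    "storage_location": str,
    "is_regulated": bool,
    "last_reviewed": datetime,
}
REQUIRED = {"data_owner", "data_confidentiality"}

def validate_tags(tags: dict) -> list[str]:
    """Return a list of problems; an empty list means the tags pass."""
    problems = [f"missing required tag: {f}" for f in REQUIRED - tags.keys()]
    for field, value in tags.items():
        expected = TAG_TYPES.get(field)
        if expected is None:
            problems.append(f"unknown tag: {field}")
        elif not isinstance(value, expected):
            problems.append(f"{field}: expected {expected.__name__}")
    return problems

asset = {
    "data_owner": "finance",
    "data_confidentiality": "internal",
    "is_regulated": True,
    "last_reviewed": datetime(2024, 1, 15),
}
print(validate_tags(asset))   # [] -> tags are compliant
```

A schema like this is also where the governance questions land: one department may insist `data_owner` is required while another adds `storage_location` for compliance or cost-analysis grouping.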
Cloud computing architecture can include various cloud deployment models and layers. Public use services are provided by an external provider. Private use services are implemented internally in a cloud design. A hybrid architecture offers a combination of public and private cloud services to accomplish an organization's goals. A community cloud service model is a shared and cooperative infrastructure where several organizations with common concerns share data and resources.
The following is a partial list of the top cloud provider companies:
These providers offer a range of services including the following:
On-demand, or elastic, cloud computing changes the way information and services are consumed and provided. Users can consume services at a rate that is set by their particular needs. Cloud computing offers several benefits, including the following:
Cloud providers offer greater storage capability that is elastic and lower in cost. For some global cloud providers, these storage locations are regional and redundant, with layers of security and data backups built in, so if one storage location goes down, the resources, applications, data, and access remain available.
According to the International Data Corporation (IDC):
“The proliferation of devices, compliance, improved system performance, online commerce, and increased replication to secondary or backup sites is contributing to an annual doubling of the amount of information transmitted over the Internet.”
What this means is that we are now dealing with much more data than in the past. Servers sometimes strain under the load of stored and accessed data. The cost of dealing with large amounts of data is something that all companies must address.
There are also increased economic pressures to stay competitive. Companies are looking at cost-saving measures. Cloud computing provides much greater flexibility than previous computing models, but the danger is that the customer must perform due diligence.
The benefits of cloud computing are many. One of the real advantages of cloud computing is the ability to use someone else's storage. Another advantage is that when new resources are needed, the cloud can be leveraged, and the new resources may be implemented faster than if they were hosted locally at your company. With cloud computing, you pay as you go. Another benefit is the portability of the application. Users can access data from work, from home, or at client locations. There is also the ability of cloud computing to free up IT workers who may have been tied up performing updates, installing patches, or providing application support. The bottom line is that all of these reasons lead to reduced capital expense, which is what all companies are seeking. In Exercise 9.2 you will examine the benefits of cloud computing.
Cloud models can be broken into several basic designs that include infrastructure as a service, monitoring as a service, software as a service, and platform as a service. Each design is described here:
- Amazon Web Services: aws.amazon.com
- AppDynamics: www.appdynamics.com. It provides a Java-based MaaS solution.
- Salesforce: www.salesforce.com
- Google Workspace: workspace.google.com

With so many different cloud-based services available, it was only a matter of time before security moved to the cloud. Such solutions are known as security as a service (SECaaS). SECaaS is a cloud-based solution that delivers security as a service from the cloud. SECaaS functions without requiring onsite hardware, and as such it avoids substantial capital expenses. The following are some examples of the type of security services that can be performed from the cloud:
- www.hashsets.com: This hash set is maintained by the National Software Reference Library (NSRL). These hashes can be used by law enforcement, government, and industry organizations to review files on a computer by matching file profiles in the database.

From a security standpoint, one of the first questions that must be answered in improving the overall security posture of an organization is where data resides. The advances in technology make this much more difficult than in the past. Years ago, Redundant Array of Inexpensive/Independent Disks (RAID) was the standard for data storage and redundancy. Today, companies have moved to dynamic disk pools (DDPs) and cloud storage. A DDP shuffles data, parity information, and spare capacity across a pool of drives so that the data is better protected and downtime is reduced. DDPs can be rebuilt up to eight times faster than traditional RAID.
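Matching files against a known-hash set works roughly like this (a minimal sketch using Python's standard `hashlib`; the sample "database" stands in for an NSRL-style hash set, which in reality holds millions of entries):

```python
# Sketch of known-file matching: hash each file's contents and look the
# digest up in a set of known hashes, the way NSRL-style sets are used
# to filter known-good files during a file review.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-in for a published hash set of known files.
known_good = {sha256_of(b"standard OS file contents")}

def classify(file_bytes: bytes) -> str:
    return "known" if sha256_of(file_bytes) in known_good else "unknown"

print(classify(b"standard OS file contents"))  # known
print(classify(b"suspicious payload"))         # unknown
```

Because lookups are by digest, examiners can discard "known" files quickly and concentrate analysis on the "unknown" remainder.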
Enterprise storage infrastructures may not have adequate protection mechanisms. The following basic security controls should be implemented:
Cloud computing has many benefits, but there are disadvantages as well, especially for smaller organizations.
Although cost and ease of use are two great benefits of cloud computing, there are significant security concerns when considering on-demand/elastic cloud computing.
Cloud computing is a big change from the way IT services have been delivered and managed in the past. One of the advantages is the elasticity of the cloud, which provides the online illusion of an infinite supply of computing power. Cloud computing places assets outside the owner's security boundary. Historically, items inside the security perimeter were trusted, whereas items outside were not. With cloud computing, an organization is being forced to place their trust in the cloud provider. The cloud provider must develop sufficient controls to provide the same or a greater level of security than the organization would have if the cloud were not used.
As a CASP+, you must be aware of the security concerns of moving to a cloud-based service. The pressures are great to make these changes, but there is always a trade-off between security and usability. Here are some basic questions that a security professional should ask when considering cloud-based solutions and the controls that must be put in place.
Insiders, or those with access, have the means and opportunity to launch an attack and only lack a motive. Anyone considering using the cloud needs to look at who is managing their data and what types of controls are applied to individuals who may have logical or physical access.
In Exercise 9.3, you will examine some common risks and issues associated with cloud computing as they would affect your organization.
Data sovereignty refers to a country's laws and the control that country has over the data that resides within its jurisdiction. A country's data laws could restrict the cross-border transfer of data, imposing legal requirements that may conflict with those of the country in which the user currently resides. Data laws can impose jurisdiction over data that may change as the data is transferred across borders. Different privacy, data security, and transfer obligations may apply if the data is hosted in different countries or is controlled by different cloud providers.
There is no known uniform, worldwide regulation that governs the protection of a user's data, but the General Data Protection Regulation (GDPR) comes as close as any standard to meeting this objective so far. Laws of various countries often differ in terms of where the data is stored and where the third-party storage provider is based. For example, a U.S.-based company may opt to store financial data in Ireland or protected health information (PHI) in Germany. As mentioned, the GDPR is an example of a more recent regulation that affects any online organization that collects or processes the personal data of people in European Union (EU) countries. The GDPR is a regulation specific to the area of data privacy, and it applies across national borders, unlike, say, a law of a single country such as Germany. As of May 25, 2018, any such organization must ensure compliance with the GDPR or face substantial penalties. The GDPR is the strongest case of data sovereignty through regulation to date.
To complicate matters further, some countries have laws against overly strong encryption. This can result in complex compliance issues.
When addressing potential data sovereignty issues, corporations can begin the process by analyzing the different technical, legal, and business issues. Corporations should also conduct a detailed analysis of the following:
Computer criminals always follow the money, and as more companies migrate to cloud-based services, look for the criminals to follow. Here are some examples of attacks to which cloud services are vulnerable:
Other kinds of attacks include keyloggers, custom malware sent via phishing (such as malicious PDFs), and trojaned USB keys dropped in the cloud provider employee parking lot. A dedicated attacker who is targeting a big enough cloud provider might even apply for a job at the facility, simply to gain some level of physical access.
All systems have an inherent amount of risk. The goal of the security professional is to evaluate the risk and aid management in deciding on a suitably secure solution. Cloud computing offers real benefits to companies seeking a competitive edge in today's economy. Many more providers are moving into this area, and the competition is driving prices even lower.
Attractive pricing, the ability to free up staff for other duties, and the ability to pay for services as needed will continue to drive more businesses to consider cloud computing. Before any services are moved to the cloud, the organization's senior management should assess the potential risk and understand any threats that may arise from such a decision. One concern is that cloud computing blurs the natural perimeter between the protected inside and the hostile outside. Security of any cloud-based services must be closely reviewed to understand what protections exist for your information. There is also the issue of availability. This availability could be jeopardized by a DoS attack or by the service provider suffering a failure or going out of business. Also, what if the cloud provider goes through a merger? What kind of policy changes occur? What kind of notice is provided in advance of the merger? All of these issues should be covered in the contract.
Unfortunately, one of the largest vulnerabilities in the cloud is simple customer error or misconfiguration. A cloud misconfiguration is any error or gap that leaves risk exposed. That risk could be exploited by an attacker or a malicious insider, and it doesn't take much technical knowledge to extract data or compromise cloud assets. Security researchers disclosed that a nonprofit organization in Los Angeles exposed more than 3.5 million records, including PII, because an AWS S3 storage bucket holding its databases was configured to be "public and anonymously accessible." Misconfigured cloud services pose a high security risk, so make sure the people administering your cloud are well trained.
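A check for the "public and anonymously accessible" condition can be sketched as follows. The dictionary shape mirrors what an S3 GetBucketAcl call returns, and the group URIs are AWS's well-known identifiers for "everyone" and "any authenticated AWS user"; the sample ACL itself is fabricated for illustration:

```python
# Sketch of an S3-style misconfiguration check: flag ACL grants that make a
# bucket readable or writable by the world. Sample data is fabricated.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl: dict) -> list[str]:
    """Return permissions granted to everyone (ideally an empty list)."""
    findings = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            findings.append(grant["Permission"])
    return findings

leaky_bucket_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}
print(public_grants(leaky_bucket_acl))  # ['READ'] -> world-readable bucket
```

Automated scans like this, run continuously rather than once, are one way organizations catch the configuration drift that produces leaks like the one described above.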
Even though your data is in the cloud, it must physically be located somewhere. Is your data on a separate server, is it co-located with the data of other organizations, or is it sliced and diced so many times that it's hard to know where it resides? Your cloud storage provider should agree in writing to provide the level of security required for your customers.
Tape was the medium of choice for backup and archiving for most businesses for many years. This was in part due to the high cost of moving backup and archival data to a data warehouse. Such activities required hundreds of thousands of dollars in infrastructure investment. Today that has started to change as cloud service providers are beginning to sell attractively priced services for cloud storage. Such technologies allow companies to do away with traditional in-house technologies. Cloud-based archiving and warehousing have several key advantages.
How much storage is enough? How big a hard drive should I buy? These are good questions—there never seems to be enough storage space for home or enterprise users. Businesses are no different and depend on fast, reliable access to information critical to their success. This makes enterprise storage an important component of most modern companies. Enterprise storage can be defined as computer storage designed for large-scale, high-technology environments.
Think of how much data is required for most modern enterprises. There is a huge dependence on information for the business world to survive. Organizations that thrive on large amounts of data include government agencies, credit card companies, airlines, telephone billing systems, global capital markets, e-commerce, and even email archive systems. Although the amount of storage needed continues to climb, there is also the issue of terminology used in the enterprise storage market. Terms such as heterogeneous, SAN, NAS, virtualization, and cloud storage are frequently used.
Before any enterprise storage solution is implemented, a full assessment and classification of the data should occur. This would include an analysis of all threats, vulnerabilities, existing controls, and the potential impact if loss, disclosure, modification, interruption, or destruction of the data should occur.
Now that we've explored some of the security issues of enterprise storage, let's look at some of the technologies used in enterprise storage.
Virtual file sharing services are another type of virtual storage. These services are not meant for long-term use; they allow users to transfer large files. Examples of these services include Dropbox, DropSend, and MediaFire. These virtual services work well if you are trying to share very large files or move information that is too big to be sent as an attachment.
On the positive side, there are many great uses for these services, such as keeping a synchronized copy of your documents in an online collaboration environment, sharing documents, and synchronizing documents between desktops, laptops, tablets, and smartphones.
The disadvantages of these services include the fact that you are now placing assets outside the perimeter of the organization. There is also the issue of loss of control. If these providers go out of business, what happens to your data? Although these services do fill a gap, they can be used by individuals to move data illicitly. Another concern is the kind of controls placed on your data. Some of these services allow anyone to search sent files.
In Exercise 9.4, you'll look at security issues involved in online storage.
Many NAS devices make use of the Linux OS and provide connectivity via network file sharing protocols. One of the most common protocols used is Network File System (NFS). NFS is a standard designed to share files and applications over a network. NFS was developed by Sun Microsystems (now part of Oracle) back in the mid-1980s. The Windows-based counterpart used for file and application sharing is Common Internet File System (CIFS); it is an open version of Microsoft's Server Message Block (SMB) protocol.
For the CASP+ exam, this is often referred to as object storage or file-based storage. This type of cloud solution is seen in Amazon Simple Storage Services (S3).
For the CASP+, this is referred to in the objectives as block storage. Block cloud storage solutions include Amazon Elastic Block Store (EBS) and are provisioned with ultra-low latency for high performance.
Enterprise storage multipath solutions reduce the risk of data loss or lack of availability by setting up multiple routes between a server and its drives. The multipathing software maintains a list of all requests, passes them through the best possible path, and reroutes communication if one of the paths dies. One of its major advantages is its speed of access.
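A simplified view of what the multipathing software does is sketched below (the path objects, names, and failure simulation are illustrative, not any vendor's API):

```python
# Toy multipath I/O: keep a list of paths to the same storage, send each
# request down the first healthy path, and reroute if a path dies.
class Path:
    def __init__(self, name: str):
        self.name = name
        self.alive = True

    def send(self, request: str) -> str:
        if not self.alive:
            raise IOError(f"{self.name} is down")
        return f"{request} via {self.name}"

def send_with_failover(paths: list, request: str) -> str:
    for path in paths:
        try:
            return path.send(request)    # use the best available path
        except IOError:
            continue                     # reroute: try the next path
    raise IOError("all paths to storage failed")

paths = [Path("fc-hba-0"), Path("fc-hba-1")]
print(send_with_failover(paths, "read block 42"))   # read block 42 via fc-hba-0
paths[0].alive = False                              # simulate a dead path
print(send_with_failover(paths, "read block 42"))   # read block 42 via fc-hba-1
```

Real multipathing software additionally balances load across healthy paths, which is where the speed-of-access advantage comes from; this sketch shows only the availability side.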
Data dispersion consists of information being distributed and stored in multiple cloud pods, which is a key component of cloud storage architecture. The ability to have data replicated throughout a distributed storage infrastructure is critical. This allows a cloud service provider to offer storage services based on the level of the user's subscription or the popularity of the item. Bit splitting is another technique for securing data over a computer network that involves encrypting data, splitting the encrypted data into smaller data units, distributing those smaller units to different storage locations, and then further encrypting the data at its new location. Data is protected from security breaches, because even if an attacker is able to retrieve and decrypt one data unit, the information is useless unless it can be combined with decrypted data units from the other locations.
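The splitting step can be illustrated with a toy XOR secret-sharing scheme: each share alone is random noise, and only combining all shares recovers the data. This is an illustration only; as the chapter notes, bit splitting also encrypts the data before splitting and again at each storage location, and production systems use stronger threshold schemes:

```python
# Toy bit splitting via XOR shares: distribute the shares to different
# storage locations; any single share reveals nothing about the data.
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n_shares: int = 3) -> list:
    """Split data into n_shares pieces; all are needed to reconstruct."""
    pads = [secrets.token_bytes(len(data)) for _ in range(n_shares - 1)]
    final = reduce(xor_bytes, pads, data)   # data XOR every random pad
    return pads + [final]

def combine(shares: list) -> bytes:
    """XOR all shares together; the random pads cancel, leaving the data."""
    return reduce(xor_bytes, shares)

shares = split(b"customer record", n_shares=3)
print(combine(shares))   # b'customer record'
```

The security property the chapter describes falls out of the XOR algebra: an attacker who retrieves one share from one location holds uniformly random bytes until the shares from the other locations are obtained.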
Whether you are storing objects, files, databases, blocks of data, or binary large objects (BLOBs) in the cloud, there are several best practices that help accomplish the safety of your information.
Encryption of sensitive data in the cloud is a vital security step. There are many ways to implement key design with a data store. A data store is a repository for storing and managing a collection of data.
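One common data store design is the key/value model. A minimal, illustrative sketch in Python follows; the hash-based sharding scheme is a simplification of how large key/value stores distribute keys across instances, and the keys and values are invented for the example.

```python
import hashlib

# Minimal key/value store sketch: each value is addressed by a unique key,
# and keys are distributed across "instances" (shards) by hashing, which
# is roughly how key/value stores scale out. Simplified for illustration.
NUM_SHARDS = 4
shards = [{} for _ in range(NUM_SHARDS)]

def shard_for(key):
    digest = hashlib.sha256(key.encode()).digest()
    return shards[digest[0] % NUM_SHARDS]

def put(key, value):
    shard_for(key)[key] = value   # overwriting the key replaces the value

def get(key):
    return shard_for(key).get(key)

put("color", "green")
put("clothing", "shirt")
put("color", "blue")              # updates replace the entire value
```

Because any node can compute `shard_for(key)` independently, reads and writes need no central coordinator, which is the source of the model's scalability.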
A key/value store associates each data value with a specific, unique key. To modify a value, an application overwrites the key, replacing the entire value. Key/value stores are extremely scalable because the key space can be partitioned and distributed across multiple instances. Amazon DynamoDB is probably the most well-known key/value store. A key-value pair consists of two related pieces of data: the key, a constant such as color, and the value, such as an article of clothing, that belongs to that key. A fully formed key-value pair could be the “color=green, clothing=shirt” pair.

In addition to cloud-based storage sites, other storage types pose security and privacy concerns. The actual risks depend largely on whether storage is nonremovable or removable. USB On-The-Go (USB OTG) solves the problem of not being able to connect a standard USB flash drive directly to a mobile device: it is flash drive storage with a physical interface capable of attaching to almost any smartphone or small form-factor device. The risk of misplacing this portable storage depends on how private or critical the information stored on it is.
Other removable storage, such as a swappable drive from a larger device, also carries the risk of being easily misplaced, or of being stolen outright from its bay. A malicious person may also transfer or send backup data to removable or uncontrolled storage.
In Exercise 9.5, you will use the cloud to store and transfer a large file.
Years ago, many organizations resisted cloud technology adoption because of the lack of control and understanding. There are risks, threats, and vulnerabilities no matter where the data is stored. Moving to the cloud is a big decision, and modern cloud computing has many benefits including increased security, flexibility, and cost savings.
Cloud automation is technology that executes processes and procedures without human intervention. By having decisions made based on defined relationships and the actions to be taken, human mistakes can be avoided, and processes that once required the involvement of IT staff can happen automatically. By using automation of a single task or orchestration of many automated tasks, enterprise organizations improve standard operating procedures for specific use cases and increase efficiency and consistency. Several cloud orchestration tools exist, including Puppet, Ansible, and Chef. These three tools are fairly simple to use and have robust capabilities: Puppet works best for automated provisioning of assets and configuration automation with strong visualization and reporting; Chef is used more for compliance and security management; and Ansible is the easiest of the three to implement, good for simple orchestration, but it does not scale in large environments as well as the other two.
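Tools such as Puppet, Chef, and Ansible share one core idea: declare a desired state and let the tool converge the system to it, idempotently. A hedged Python sketch of that convergence loop follows; the service names and states are invented for the example.

```python
# Sketch of the declare-desired-state model used by configuration
# management tools: compare actual state to desired state and change
# only what differs. Service names here are invented.

desired = {"ntp": "running", "telnet": "stopped", "sshd": "running"}
actual  = {"ntp": "stopped", "telnet": "running", "sshd": "running"}

def converge(desired, actual):
    changes = []
    for service, state in desired.items():
        if actual.get(service) != state:
            actual[service] = state          # "apply" the change
            changes.append((service, state))
    return changes                           # idempotent: empty on re-run

first_run  = converge(desired, actual)       # fixes the drifted services
second_run = converge(desired, actual)       # nothing left to fix
```

Idempotence is the key design choice: running the same automation twice causes no further changes, so scheduled runs double as continuous drift detection.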
Sensitive information is being moved to the cloud now more than ever. According to the Ponemon Institute, the average cost of a breach is now more than $3.8 million USD. One of the most essential elements of preventing the loss of data being moved to the cloud is having robust encryption for the data while at rest as well as in transit.
Encryption makes data unreadable to anyone without access to the encryption keys. Kerckhoffs's principle states that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge. Only 1 percent of cloud providers support tenant-managed encryption keys. With the right tools and configuration, you can protect data with standards-based AES encryption for data at rest, ensuring compliance with PCI, HIPAA, and other federal or industry requirements.
Cloud encryption solutions can encrypt information as it moves in and out of applications and into storage with strong key-based encryption. Most reputable cloud service providers offer cloud encryption options. The most used type of cloud data-in-transit encryption is the HTTPS protocol. When using Transport Layer Security (TLS), the more modern and secure successor to Secure Sockets Layer (SSL), all traffic is encrypted so that only authorized users can access the data. If an unauthorized third party sees the data, it remains unreadable because the digital keys to lock and unlock it exist only at the endpoints. Session keys are negotiated using asymmetric algorithms between trusted entities, and certificates are validated during the initial connection.
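In Python, the standard library's `ssl` module already enforces the TLS behavior described above: certificate verification against trusted CAs and hostname checking. A brief sketch (no network connection is made here; the hostname in the comment is a placeholder):

```python
import ssl

# ssl.create_default_context() gives secure defaults: the peer's
# certificate must chain to a trusted CA, and the name in the
# certificate must match the server we asked for.
context = ssl.create_default_context()

verification_on = context.verify_mode == ssl.CERT_REQUIRED
hostname_check  = context.check_hostname

# To use it, wrap a socket before talking to the server, e.g.:
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           ...  # traffic on `tls` is now encrypted
```

Disabling either check (as some sample code on the Web does) silently reopens the door to on-path interception.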
Every device on a modern cloud network generates logs. Some logs are human readable, and some logs look like gibberish. Some logs are more useful than others, and we should understand which cloud logs need to be preserved for future analysis and for how long. You don't need to log everything, but what you do log should be purposely collected and managed because the logs can show you who did what activity and how the systems they touched responded.
The Center for Internet Security (CIS) Critical Security Controls Version 8 focuses on the collection, maintenance, monitoring, and analysis of audit logs. Our organizations are evolving quickly, and we have to learn to deal with log data in the big data cloud era. Analyzing audit logs is a vital part of security, not just for system security but for processes and compliance. Part of the process of log analysis is reconciling and correlating logs from different sources, even if those devices are in different time zones. Network Time Protocol (NTP) helps synchronize devices using the cloud; Google Cloud even runs its own public NTP service, Google Public NTP.
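Even with NTP keeping clocks accurate, logs from devices in different time zones only correlate once their timestamps are normalized to a common zone, conventionally UTC. A small standard-library sketch with invented event times:

```python
from datetime import datetime, timezone, timedelta

# Two log entries for the same event, recorded by devices in different
# time zones. Normalizing both to UTC shows they are simultaneous.
ny  = timezone(timedelta(hours=-5))   # US Eastern (standard time)
tok = timezone(timedelta(hours=9))    # Japan

event_ny  = datetime(2021, 11, 3, 9, 30, tzinfo=ny)
event_tok = datetime(2021, 11, 3, 23, 30, tzinfo=tok)

utc_ny  = event_ny.astimezone(timezone.utc)   # 14:30 UTC
utc_tok = event_tok.astimezone(timezone.utc)  # 14:30 UTC
```

Without this normalization step, a correlation engine would treat the two records as fourteen hours apart and miss the connection.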
In a basic network topology, you will have many types of devices, including routers, switches, firewalls, servers, and workstations. Each of these devices that helps connect you to the rest of the world will generate logs based on its operating system, configuration, and software. Examining logs is one of the most effective ways of looking for issues and investigating problems happening on a system or in an application.
Synchronization and the ability to correlate the data between these devices are vital to a healthy environment. Attackers can hide their activities on assets if logging is not done correctly; therefore, you need a strategic method of consolidating and auditing all your logs. Without solid audit log analysis, an attack can go unnoticed for a long time. According to the 2021 Verizon Data Breach Investigations Report, the Verizon Threat Research Advisory Center's intelligence collections in both 2019 and 2020 began with cyber espionage targeting cloud environments by the Chinese menuPass threat actor, and attacks on remote access were among the ongoing threats. The full report was based on detailed analysis of more than 79,600 security incidents, including 5,258 data breaches. You can download the full details at www.verizon.com/business/resources/reports/dbir/2021/year-in-review-2021.
Logging involves collecting data from a large number of data sources, which has its own challenges including collection, storage, encryption, and parsing. Key considerations when configuring logs for collection include normalization, alerting, security, correlation, availability, monitoring, and analysis.
Normalization, or parsing, structures log data for analysis: breaking each log entry into specific fields makes it easier to read, correlate, and analyze. Correlation means connecting the dots to identify a sequence of events that could indicate a breach. Monitoring and alerting on specific events, or on scenarios identified through prior analysis, takes a more proactive approach to spotting security incidents. Availability means the stored logs can actually be retrieved when an investigation needs them. Finally, the logging solution chosen must keep the logs secure and provide data compression and other measures to handle the high volume.
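Parsing a raw log line into named fields can be sketched with a regular expression. The log line below is a simplified, invented example of an SSH authentication failure; real normalization pipelines handle many formats.

```python
import re

# Normalize a raw (invented) auth-log line into named fields so it can
# be correlated and analyzed alongside logs from other sources.
LINE = ("Nov  3 14:30:01 web01 sshd[2412]: "
        "Failed password for root from 203.0.113.9 port 52144")

PATTERN = re.compile(
    r"(?P<timestamp>\w{3}\s+\d+ [\d:]+) "
    r"(?P<host>\S+) "
    r"(?P<process>\w+)\[(?P<pid>\d+)\]: "
    r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+)"
)

match = PATTERN.search(LINE)
event = match.groupdict() if match else {}
```

Once every source emits the same field names (`host`, `user`, `src_ip`, and so on), a correlation engine can join events across devices by those fields.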
Without logging, a threat actor can be in an environment and fly completely under the radar. There are many solutions for logging for audit and centralization, including Elasticsearch, Logstash, and Kibana (ELK), which is the most common open-source solution used. There are some security information and event management (SIEM) tools that are customizable for security analytics. SIEM tools centralize information gathering and analysis and provide detailed dashboards and reporting that allow critical information to be seen through visualization, and they offer manual analysis as well as automated analysis capabilities. They work with logging infrastructures using tools such as syslog, syslog-ng, or others that gather and centralize logs, building logging infrastructures that capture evidence used for incident analysis and creating an audit trail. At the same time, additional information captured such as network flows and traffic information, file and system metadata, and other artifacts are used by responders who need to analyze what occurred on a system or network.
Gartner's Magic Quadrant report for 2021 includes Splunk, Rapid7 InsightIDR, LogRhythm, and Exabeam. These SIEM tools work by taking a baseline of two to four weeks of log ingestion to learn the normal state of an organization and then can start monitoring and alerting for anomalies.
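The baseline-then-alert behavior of these SIEM tools can be sketched statistically: learn the normal daily event volume during a training window, then flag days that deviate by more than a few standard deviations. This is a deliberately simplified model of what commercial products do, and the counts are invented.

```python
import statistics

# Simplified SIEM-style anomaly detection: learn a baseline of normal
# daily log volumes, then flag counts far outside that baseline.
baseline = [980, 1010, 995, 1005, 990, 1002, 998]  # training week (invented)

mean  = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from normal."""
    return abs(count - mean) > threshold * stdev

normal_day = is_anomalous(1004)   # within the learned baseline
spike_day  = is_anomalous(5200)   # e.g. a brute-force burst
```

Real SIEMs baseline many dimensions at once (per user, per host, per event type), but the core idea of alerting on deviation from a learned normal is the same.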
The cloud is not a single object. There are many moving parts that affect performance and availability, with each part needing to work well with other parts. When looking at monitoring a cloud environment, you have to watch the network, the individual VMs, the databases, the websites, and storage. With the network, cloud administrators have to watch for connectivity to make sure they are not overwhelmed with traffic. The VMs will have to be monitored for access and status to make sure they are operating as intended. Database monitoring is incredibly important because of what is in the database, usually sensitive organizational data. Databases will need to be monitored for queries, access requests, integrity, and backups. Proactively monitoring websites will allow for optimal uptime, and storage is costly, so making sure performance and analytics are kept within proper ranges will keep expenses down.
Some cloud monitoring best practices may include the following:
It is difficult to keep up with shifting policies and compliance requirements, but with automation through open-source software like Puppet or other tooling such as SolarWinds, an organization can monitor the cloud ecosystem for changes and revert them as needed. Manually performing audit inspections to prepare for an auditor is time-consuming and tedious. Puppet is mostly used on Linux and Windows, and it can manage infrastructure through continuous monitoring using policy as code. For more information, visit puppet.com/use-cases/continuous-compliance.
Chapter 6, “Cryptography and PKI,” covered public key infrastructure, which allows two parties to communicate securely even if they were previously unknown to one another. This chapter covers the certificate authority (CA), registration authority (RA), certificate revocation lists (CRLs), digital certificates, and how they are distributed. The question this chapter covers is how this process can be different in a cloud-based ecosystem.
Most cloud service providers offer some type of encryption for customer data. As mentioned earlier in this chapter, protecting data in transit using HTTPS in the cloud between servers or user devices is reliable and straightforward. Encryption protection becomes more complicated for data at rest on a cloud server. Cloud providers can encrypt the data and maintain control over the keys. This can be a security risk because now a company is dealing with malicious insiders and nefarious outsiders who target the cloud provider. Many organizations are not willing to use cloud-based storage for their most sensitive data.
The other two options for key ownership and location are bring your own key (BYOK) and hold your own key (HYOK). With BYOK, the customer generates and manages the encryption keys, but the cloud provider still has access to them. With HYOK, the customer generates, manages, and stores the keys, and the cloud provider is never able to see the contents of the encrypted files.
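BYOK and HYOK usually rest on envelope encryption: a data key encrypts the data, and the customer's master key wraps the data key. The sketch below models only that key flow; a toy XOR cipher stands in for AES, so this is illustrative, not usable cryptography.

```python
import secrets

# Envelope-encryption key flow (toy XOR cipher in place of real AES,
# purely to show which key protects which object).
def toy_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

master_key = secrets.token_bytes(32)   # customer-held (BYOK/HYOK)
data_key   = secrets.token_bytes(32)   # per-object key

ciphertext  = toy_cipher(b"sensitive record", data_key)
wrapped_key = toy_cipher(data_key, master_key)  # stored beside ciphertext

# Decrypt: unwrap the data key with the master key, then unwrap the data.
recovered = toy_cipher(ciphertext, toy_cipher(wrapped_key, master_key))
```

The design pays off at scale: rotating or revoking the master key only means re-wrapping small data keys, not re-encrypting every stored object, and under HYOK the provider only ever sees `ciphertext` and `wrapped_key`.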
With the use of keys in securing cloud environments comes the need for a key management system. National Institute of Standards and Technology (NIST) special publication 800-57 Part 2, Revision 1, gives recommendations for key management and best practices and introduces a set of key management concepts such as key life cycle, practice statements, policies, and planning documents.
Key life-cycle management refers to everything from the creation of to the retirement of cryptographic keys. There are several key life-cycle management models that can be used such as NIST or Microsoft. The six states a key goes through in the Microsoft Key Life-Cycle Model are as follows:
Defining and enforcing key management policies influences each state of the life cycle and needs to be governed by a key usage policy that defines the cloud assets and applications and what operations those assets and applications can perform.
Details on this publication can be found here: nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt2r1.pdf.
As our cybersecurity industry moves to a software-defined infrastructure using virtualization, cloud infrastructure, and containers, this translates to systems that would have once been backed up not being backed up in a traditional way. Instead, the code that defines them is backed up, as well as the key data that they are intended to provide or to access. This changes the policies and procedures for server and backup administrators, and it means that habits around how backup storage is accomplished and maintained are changing for backup and recovery. Reviewing organizational backup habits to see if they match new frameworks or if current procedures are failing disaster recovery tests is an important element in planning.
For on-premises systems, some organizations store backup media physically, either at a site they own and operate or via a third-party service that specializes in storing secure backups in environmentally controlled facilities; others choose cloud-based offsite storage. Offsite storage is a form of geographic diversity and helps ensure that a single disaster cannot destroy an organization's data entirely; it can be implemented both physically and in the cloud for business continuity and disaster recovery (BCDR). For geographic diversity, distance matters: the offsite location should be far enough away that a single regional disaster is unlikely to harm it. BCDR plans define the processes and procedures that an organization will follow when a disaster occurs, which is equally important when those assets are in the cloud.
In a BCDR plan, processes describe all of the documented, procedural “how-tos” of the organization's way of conducting operations. Some processes are so routine and so ingrained in the minds of those performing them that they hardly seem necessary to document. On the other end of the spectrum, some processes must be documented because they are performed only under extraordinary circumstances, perhaps even in times of crisis, such as, for example, during disaster recovery.
Preparation for an incident or disaster includes building a team, putting policies and procedures in place, conducting exercises, and building the technical and information-gathering infrastructure that will support incident response needs. These plans cannot exist in a vacuum. Instead, they are accompanied by communications and stakeholder management plans, as well as other detailed response processes unique to each organization.
As mentioned earlier in this chapter, infrastructure as a service (IaaS) is a type of cloud computing model that offers essential computation, storage, and networking on demand. Migrating an organization to an IaaS model helps an enterprise reduce the number of physical datacenters needed and gives an organization a great deal of flexibility to spin resources up and down as needed. Serverless computing is a type of infrastructure as a service with a slightly different strategy.
In serverless computing, you don't worry about the infrastructure and configuration; everything is managed by the cloud provider, and cloud customers pay for the number of times their code runs on the serverless service. This enables developers to build applications faster while the cloud service automatically handles all tasks required to run the code. Serverless offerings include workflows, Kubernetes, and application environments where the back and front ends are fully hosted.
Software that is written for application virtual machines allows the developer to create one version of the application so that it can be run on any virtual machine and won't have to be rewritten for every different computer hardware platform. Java Virtual Machine (JVM) is an example of such application virtualization.
Software-defined networking (SDN) is a technology that allows network professionals to virtualize the network so that control is decoupled from hardware and given to a software application called a controller.
In a typical network environment, hardware devices such as switches make forwarding decisions so that when a frame enters the switch, the switch's logic, built into the content addressable memory (CAM) table, determines the port to which the data frame is forwarded. All packets with the same address will be forwarded to the same destination. SDN is a step in the evolution toward programmable and active networking in that it gives network managers the flexibility to configure, manage, and optimize network resources dynamically by centralizing the network state in the control layer.
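The forwarding logic described above, which SDN lifts out of the individual switch and into the controller, can be sketched as a lookup table mapping destination addresses to egress ports. The MAC addresses and port numbers here are invented for illustration.

```python
# A switch's CAM table, reduced to its essence: destination MAC ->
# egress port. In SDN the controller, not the switch, programs these
# entries. Addresses and ports below are invented.
cam_table = {
    "aa:bb:cc:00:00:01": 1,
    "aa:bb:cc:00:00:02": 7,
}
FLOOD = -1  # unknown destinations are flooded out all ports

def forward(dst_mac):
    return cam_table.get(dst_mac, FLOOD)

port    = forward("aa:bb:cc:00:00:02")   # known host: one egress port
unknown = forward("aa:bb:cc:00:00:99")   # unknown host: flood

# An SDN controller changes forwarding centrally by rewriting entries:
cam_table["aa:bb:cc:00:00:02"] = 3       # e.g. traffic re-steered
```

In a traditional switch the table is learned locally from source addresses; the SDN shift is that the last line happens on a central controller, instantly and for the whole fabric.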
Software-defined networking allows networking professionals to respond to the dynamic needs of modern networks. With SDN, a network administrator can shape traffic from a centralized control console without having to touch individual switches. Based on demand and network needs, the network switch's rules can be changed dynamically as needed, permitting the blocking, allowing, or prioritizing of specific types of data frames with a very granular level of control. This enables the network to be treated as a logical or virtual entity.
SDN is defined as three layers: application, control, and the infrastructure or data plane layer. At the core of SDN is the OpenFlow standard. OpenFlow is defined by the Open Networking Foundation (ONF). OpenFlow provides an interface between the controller and the physical network infrastructure layers of SDN architecture. This design helps SDN achieve the following, all of which are limitations of standard networking:
Misconfiguration simply means configuring a system incorrectly. Cloud misconfiguration seems avoidable, but according to the 2021 IBM Security Cost of a Data Breach Report, the average cost of a breach is $4.24 million, and two-thirds of cloud breaches can be traced to misconfiguration, particularly of cloud application programming interfaces (APIs).
The cloud has many settings, assets, services, resources, and policies, and that makes it an environment that is difficult to set up correctly. It is even more true for organizations that have had to migrate quickly to the cloud for remote work with an IT department that does not fully understand the details of configuration. Misconfiguration is one of the leading causes of financial damage done to enterprise as well as governmental organizations.
Cloud providers like AWS have a service called Cloud Conformity Checks. These are rules run against the customer's configuration or infrastructure. These scans will take a rule, run it against a system, and determine whether it was successful. According to AWS, the top service scanned in 2021 for misconfiguration was Amazon Elastic Compute Cloud, better known as EC2. The rule most broken in 2021 was AWS CloudTrail Configuration Changes. CloudTrail is a service that enables governance, compliance, and auditing and keeps an organization in compliance with APRA, MAS, and NIST4.
With this tool, you can log, continuously monitor, and retain account activity related to actions across the AWS infrastructure, providing event history of AWS account activity, including actions taken through the Management Console, command-line interface (CLI), AWS SDKs, and APIs. This event history feature simplifies security auditing, resource change tracking, and troubleshooting. You can identify who or what took which action, what resources were acted upon, when an event occurred, and other details that can help you analyze and respond to any activity within your account.
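The kind of question CloudTrail answers, who took which action on what resource and when, amounts to filtering an event history. A standard-library sketch over invented event records (a real CloudTrail record carries many more fields):

```python
# Filtering an audit event history for "who did what, when" questions.
# The events themselves are invented for this sketch.
events = [
    {"user": "alice", "action": "RunInstances", "resource": "i-0abc",
     "time": "2021-06-01T10:02Z"},
    {"user": "bob",   "action": "DeleteTrail",  "resource": "trail1",
     "time": "2021-06-01T11:45Z"},
    {"user": "alice", "action": "StopLogging",  "resource": "trail1",
     "time": "2021-06-02T09:12Z"},
]

def actions_on(resource):
    """Every recorded action against one resource, oldest first."""
    return sorted((e for e in events if e["resource"] == resource),
                  key=lambda e: e["time"])

trail_changes = actions_on("trail1")   # who touched the audit trail?
```

Note that changes to the audit trail itself (`DeleteTrail`, `StopLogging`) are exactly the events worth alerting on, which is why CloudTrail configuration changes topped the broken-rule list.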
According to the International Engineering Consortium, unified communications and collaboration is an industry term “used to describe all forms of call and multimedia/cross-media message-management functions controlled by an individual user for both business and social purposes.” This topic is of concern to the CASP+ because communication systems form the backbone of any company. Communication systems can include any enterprise process that allows people to communicate.
Web conferencing is a low-cost method that allows people in different locations to communicate over the Internet. While useful, web conferencing traffic can potentially be sniffed and intercepted. An attacker injects themselves into the stream between the web conferencing clients, which could be accomplished with tools such as Ettercap or Cain & Abel, and then captures the video traffic with a tool such as UCSniff or VideoSnarf. These tools allow the attacker to eavesdrop on video traffic, and most of them are surprisingly easy to use: you capture and load the web conferencing libpcap-based file (with the .pcap extension) and then watch and listen to the playback. Exercise 9.6 shows you how to perform a basic web conference capture.
Today, many businesses make use of videoconferencing systems. Videoconferencing is a great way for businesses to conduct meetings with customers, employees, and potential clients. If videoconferencing systems are not properly secured, there is the possibility that sensitive information could be leaked, and considering how much of the global workforce is working and schooling from home, this could be a big risk. Most laptops and even some desktop systems come with webcams, and there are a host of programs available that will allow an attacker to turn on a camera to spy on an individual. Some of the programs are legitimate, while others are types of malware and Trojan horses designed specifically to spy on users. One example is gh0st Rat. This Trojan was designed to turn on the webcam, record audio, and enable built-in internal microphones to spy on people. You can read more about this malware here: attack.mitre.org/software/S0032.
To prevent these types of problems, you should instruct users to take care when opening attachments from unknown recipients or installing unknown software and emphasize the importance of having up-to-date antivirus. Also, all conference calls should require strong passcodes to join a meeting, and the passcodes for periodic meetings should be changed for each meeting.
When videoconferencing, a user often has the obvious indication that conferencing is still ongoing; that is, they see the screen of their conferenced co-workers. Such is not the case with audio conferencing. When sharing an open line, for example on a telephone, an employee can easily forget that all ambient noise will be heard by all of the conference attendees.
Instant messaging (IM) has been around a long time and has evolved into the modern corporate landscape in tools like Microsoft Teams, Slack, and Discord. It is widely used and available in many home and corporate settings. What has made IM so popular is that it differs from email in that it allows two-way communication in near real time. It also lets business users collaborate, hold informal chat meetings, and share files and information. Although some IM platforms have added encryption, central logging, and user access controls for corporate clients, others operate without such controls.
From the perspective of the CASP+, IM is a concern due to its potential to be a carrier for malware. IM products are highly vulnerable to malware such as worms, backdoor Trojan horses, hijacking and impersonation, and denial-of-service attacks. IM can also be used to send sensitive information, largely because of the file transfer and peer-to-peer file sharing capabilities available to users of these applications. Should you decide to use IM in your organization, there are some basic questions that you need to address:
Desktop sharing software is nothing new. Some early examples of desktop sharing programs were actually classified as malware. One such program is Back Orifice (BO), released in 1998. Although many other remote Trojan programs have been created, such as NetBus and Poison Ivy, BO was one of the first to have the ability to function as a remote system administration tool. It enables a user to control a computer running the Windows operating system from a remote location. Although some may have found this functionality useful, there are other functions built into BO that made it much more malicious. BO has the ability to hide itself from users of the system, flip the images on their screens upside down, capture their keystrokes, and even turn on their webcams. BO can also be installed without user interaction and distributed as a Trojan horse.
Desktop sharing programs are extremely useful, but there are potential risks. One issue is that anyone who can connect can use your desktop to execute or run programs on your computer. A search on the Web for Microsoft Remote Desktop Services returns a list of hundreds of systems to which you can potentially connect if you can guess the username and password.
At a minimum, these applications and their related ports should be blocked and restricted to those individuals who have a need for the service. Advertising the service on the Web is also not a good idea. If there is a public link, it should not be indexed by search engines, and there should be a warning banner on the page stating that the service is for authorized users only and that all activity is logged.
Another issue with desktop sharing is the potential risk from the user's point of view. If the user shares the desktop during a videoconference, then others in the conference can see what is on the presenter's desktop. Should there be a folder titled “why I hate my boss,” everyone will see it.
Application sharing is fraught with risks as well. If the desktop sharing user then opens an application such as email or a web browser before the session is truly terminated, anybody still in the meeting can read and/or see what's been opened. Any such incident looks highly unprofessional and can sink a business deal.
Table 9.2 lists some programs and default port numbers to be aware of.
TABLE 9.2 Legitimate and malicious desktop sharing programs
| Name | Protocol | Default Port |
|---|---|---|
| Back Orifice | UDP | 31337 |
| Back Orifice 2000 | TCP/UDP | 54320/54321 |
| Beast | TCP | 6666 |
| Citrix ICA | TCP/UDP | 1494 |
| Loki | ICMP | N/A |
| Masters Paradise | TCP | 40421/40422/40426 |
| Remote Desktop Control | TCP/UDP | 49608/49609 |
| NetBus | TCP | 12345 |
| Netcat | TCP/UDP | Any |
| Reachout | TCP | 43188 |
| Remotely Anywhere | TCP | 2000/2001 |
| Remote | TCP/UDP | 135-139 |
| Timbuktu | TCP/UDP | 407 |
| VNC | TCP/UDP | 5800/5801 |
Remote assistance programs can be used to provide temporary control of a remote computer over a network or the Internet to resolve issues or for troubleshooting purposes. These tools are useful because they allow problems to be addressed remotely and can cut down on the site visits that a technician performs.
Presence is an Apple software product that is somewhat similar to Windows Remote Desktop. Presence gives users access to their Mac's files wherever they are. It also allows users to share files and data between a Mac, iPhone, and iPad.
Nominally, a device operates in one network, viewing traffic intended for that network domain. However, when the device is connected via remote assistance software or a virtual private network connection to a corporate network, it is conceivable that device can bridge two network domains. Unauthorized domain bridging is a security concern with which the CASP+ needs to be familiar.
Many individuals would agree that email is one of the greatest inventions to come out of the development of the Internet. It is the most used Internet application. Just take a look around the office and see how many people use Android phones, iPhones/iPads, tablets, and other devices that provide email services. Email provides individuals with the ability to communicate electronically through the Internet or a data communications network.
Although email has many great features and provides a level of communication previously not possible, it's not without its problems. Now, before we beat it up too much, you must keep in mind that email was designed in a different era: decades ago, security was not as much of a driving issue as usability. By default, email sends information in clear text, so it is susceptible to eavesdropping and interception. Email can be easily spoofed so that the true identity of the sender is masked. Email is also a major conduit for spam, phishing, and viruses. Spam is unsolicited bulk mail, and studies by Symantec and others have found that it is much more malicious than in the past. Although a large amount of spam is still used to peddle fake drugs, counterfeit software, and fake designer goods, today it is more often aimed at delivering malware via malicious URLs.
As for functionality, email operates by means of several underlying services, which can include the following:
Basic email operation consists of the SMTP service being used to send messages to the mail server. To retrieve mail, the client application, such as Outlook, may use either POP or IMAP. Exercise 9.7 shows how to capture clear-text email for review and reinforces the importance of protecting email with PGP, SSL/TLS, or other encryption methods.
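With the standard library's `email` and `smtplib` modules, the SMTP send side of this flow looks like the sketch below. The addresses, server name, and credentials are placeholders, and the actual network send is shown commented out so the example stays self-contained.

```python
from email.message import EmailMessage

# Build an RFC 5322 message; SMTP carries it to the mail server, and a
# client such as Outlook later retrieves it via POP or IMAP.
# All addresses below are placeholders.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Quarterly report"
msg.set_content("Report attached. Please treat as confidential.")

# Sending over an encrypted channel (STARTTLS) would look like:
#   import smtplib, ssl
#   with smtplib.SMTP("mail.example.com", 587) as server:
#       server.starttls(context=ssl.create_default_context())
#       server.login("alice", "app-password")  # placeholder credentials
#       server.send_message(msg)
```

Without the STARTTLS step, the message body and headers cross the wire in clear text, which is exactly what Exercise 9.7's capture demonstrates.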
The CASP+ should work to secure email and make users aware of the risks. Users should be prohibited by policy and trained not to send sensitive information by clear-text email. If an organization has policies that allow email to be used for sensitive information, encryption should be mandatory.
Several solutions exist to meet this need. One is Pretty Good Privacy (PGP). Other options include link encryption or secure email standards such as Secure Multipurpose Internet Mail Extensions (S/MIME) or Privacy Enhanced Mail (PEM).
Businesses with legacy PBX and traditional telephony systems are especially vulnerable to attack and misuse. One of the primary telephony threats has to do with systems with default passwords. If PBX systems are not secured, an attacker can attempt to call into the system and connect using the default password. Default passwords may be numbers such as 1, 2, 3, 4, or 0, 0, 0, 0. An attacker who can access the system via the default password can change the prompt on the voice mailbox account to “Yes, I will accept the charges.” The phone hacker then places a collect call to the number that has been hacked. When the operator asks about accepting charges, the “Yes” is heard and the call completes. These types of attacks are typically not detected until the phone bill arrives or the phone company calls to report unusual activity. Targets of this attack tend to be toll-free customer service lines or other companies that may not notice this activity during holidays or weekends.
A CASP+ should understand that the best defense against this type of attack is to change the phone system's default passwords. Employees should also be prompted to change their voicemail passwords periodically. When employees leave (are laid off, resign, retire, or are fired), their phones should be forwarded to another user, and their voicemail accounts should be immediately deleted.
Once upon a time, a network engineer was asked to run data over existing voice lines. Years later, another company asked him what he thought about running voice over existing data lines. This is the basis of VoIP. VoIP adds functionality and reduces costs for businesses, as it allows the sharing of existing data lines. This approach is typically referred to as convergence—or as triple play when video is included.
Before VoIP, voice was usually sent over the circuit-switched public switched telephone network (PSTN). These calls were then bundled by the phone carrier and sent over a dedicated communications path. As long as the conversation continued, no one else could use the same fixed path.
VoIP changes this because VoIP networks are basically packet-switched networks that utilize shared communications paths easily accessible by multiple users. Since the network is accessible by multiple users, an attacker can attempt to launch an on-path attack. An on-path attack allows an attacker to sit between the caller and the receiver and sniff the voice data, modify it, and record it for later review. Sniffing is the act of capturing VoIP traffic; the captured conversation can then be replayed to eavesdrop on it. Sophisticated tools are not required for this activity. Easily available tools such as Cain & Abel (www.oxid.it) make this possible; expensive, specialized equipment is not needed to intercept unsecured VoIP traffic.
Exercise 9.8 demonstrates how Cain & Abel can be used to sniff VoIP traffic. It's also worth mentioning that if network equipment is accessible, an attacker can use Switched Port Analyzer (SPAN) to replicate a port on a switch and gain access to trunked VoIP traffic. It's important that the CASP+ understand the importance of placing physical controls so that attackers cannot get access to network equipment.
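To see why captured VoIP traffic is so easy to interpret, consider the RTP header defined in RFC 3550, which carries most VoIP audio. The following Python sketch (the sample packet bytes are fabricated for illustration) parses the fixed 12-byte header from a sniffed UDP payload; unless SRTP is in use, the audio payload that follows is unencrypted:

```python
import struct

def parse_rtp(packet: bytes):
    """Parse a minimal 12-byte RTP header (RFC 3550) from a sniffed
    UDP payload and return its fields plus the raw audio payload."""
    if len(packet) < 12:
        raise ValueError("too short to be RTP")
    b0, b1, seq = struct.unpack("!BBH", packet[:4])
    timestamp, ssrc = struct.unpack("!II", packet[4:12])
    return {
        "version": b0 >> 6,          # always 2 for current RTP
        "payload_type": b1 & 0x7F,   # 0 = PCMU (G.711 u-law) audio
        "sequence": seq,
        "timestamp": timestamp,
        "ssrc": ssrc,
        "payload": packet[12:],      # unencrypted raw audio samples
    }

# Hypothetical captured packet: version 2, payload type 0 (PCMU),
# sequence 1000, followed by fake audio bytes.
sample = struct.pack("!BBHII", 0x80, 0x00, 1000, 160, 0xDEADBEEF) + b"\x55" * 8
fields = parse_rtp(sample)
```

Because the payload bytes map directly to audio samples in well-known codecs, an eavesdropper who reassembles the stream by sequence number can simply play it back.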
Although VoIP uses TCP in some cases for caller setup and signaling, denial of service (DoS) is a risk. VoIP relies on some UDP ports for communication. UDP can be more susceptible to DoS than TCP-based services. An attacker might attempt to flood communication pathways with unnecessary data, thus preventing any data from moving on the network. Using a traditional PSTN voice communication model would mean that even if the data network is disabled, the company could still communicate via voice. With convergence, a DoS attack has the potential to disrupt both the IP phones and the computer network.
A more recent addition to the list of VoIP vulnerabilities was demonstrated at DEF CON, where presenters showed that VoIP could be used as a command-and-control (C&C) mechanism for botnets. Basically, infected systems can host or dial into a conference call in order to perform a wide range of tasks, such as specifying which systems will participate in a distributed DoS (DDoS) attack, downloading new malware, or using the botnet for the exfiltration of data. This raises data loss prevention questions, to say the least. Here are some basic best practices that can be used for VoIP security:
- Change default passwords on all VoIP phones, gateways, and servers.
- Segregate voice traffic from data traffic using VLANs.
- Physically secure switches and other network equipment so that attackers cannot tap or mirror ports.
- Encrypt voice traffic where possible, for example with SRTP and TLS-protected signaling.
- Keep VoIP equipment patched, and disable unneeded services.
VoIP is a replacement for the PSTN of the past. The PSTN comprises companies such as AT&T, Verizon, and smaller localized carriers that still manage the lines and other public circuit-switched telephone networks. These traditional phone networks consisted of telephone lines, fiber-optic cables, microwave transmission links, and so forth that were interconnected and allowed any telephone in the world to communicate with any other.
The equipment involved was highly specialized and may have been proprietary to the telecommunications carrier, which made it much harder to attack. After all, traditional telephones were only designed to make and receive calls.
VoIP softphones can be a single application on a computer, laptop, tablet computer, or smartphone. A VoIP softphone resides on a system that has many different uses. A softphone opens another potential hole in the computer network or host that an attacker can exploit as an entry point. Hardware devices have advantages over software (softphones).
Hardware-based VoIP phones look like typical phones but are connected to the data network instead of PSTN. These devices should be viewed as embedded computers that can be used for other purposes.
A well-designed VoIP implementation requires the CASP+ to consider the design of the network and to segregate services. Using technologies such as a virtual local area network (VLAN), the CASP+ can segregate data traffic from voice traffic; however, convergence is making this task much harder. Implemented correctly, VLANs can drastically reduce, and often eliminate, the potential for sniffing attacks, whether the phones are hardware- or software-based. Such attacks rely on automated tools like those referenced earlier, as well as many others that focus exclusively on VoIP. One such tool is Voice Over Misconfigured Internet Telephones (VOMIT), which deciphers any voice traffic on the same VLAN or on any VLAN that it can access.
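The VLAN separation described above is carried inside each Ethernet frame as an 802.1Q tag. The following Python sketch (the frame bytes and the voice VLAN ID of 100 are hypothetical) shows how a monitoring script might determine which VLAN a captured frame belongs to:

```python
import struct

VOICE_VLAN = 100  # hypothetical VLAN ID reserved for voice traffic

def vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if untagged."""
    # Bytes 0-11 are the destination and source MAC addresses;
    # bytes 12-13 hold either the EtherType or the 802.1Q TPID.
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x8100:          # no 802.1Q tag present
        return None
    tci = struct.unpack("!H", frame[14:16])[0]
    return tci & 0x0FFF              # the low 12 bits carry the VLAN ID

# Hypothetical tagged frame: 12 bytes of MAC addresses, TPID 0x8100,
# then a tag control field with priority 5 (voice) and VLAN 100.
tci = (5 << 13) | VOICE_VLAN
frame = b"\x00" * 12 + struct.pack("!HH", 0x8100, tci) + b"payload"
```

A frame tagged for the voice VLAN that appears on a data-VLAN port is a sign of misconfiguration, which is exactly the condition tools like VOMIT exploit.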
Another implementation concern is quality of service (QoS). Although no one may notice if email arrives a few seconds late, voice does not have that luxury. Fortunately, segmentation via VLANs can assist with remedying this kind of issue as well. Here are some QoS examples:
- Latency: The one-way delay of packets; voice quality degrades noticeably as delay approaches roughly 150 ms.
- Jitter: Variation in packet arrival times, which causes choppy audio unless buffering compensates for it.
- Packet loss: Dropped packets, which cannot be retransmitted in real-time voice and result in gaps in the conversation.
Before VoIP systems are implemented, a CASP+ must explore techniques to mitigate risk by preventing exposures on the data network from spreading to the voice network. VoIP equipment, gateways, and servers tend to use open standards based on RFCs and open protocols, which gives an attacker a better understanding of the equipment and technology. If that is not enough, most vendors place large amounts of product information on their websites, helping attackers ramp up their knowledge very quickly.
As mentioned earlier in the chapter, bit splitting is another technique for securing data over a computer network. It involves encrypting data, splitting the encrypted data into smaller data units, distributing those smaller units to different storage locations, and then further encrypting the data at its new location. Data is protected from security breaches because even if an attacker is able to retrieve and decrypt one data unit, the information is useless unless it can be combined with decrypted data units from the other locations.
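The splitting step can be illustrated with a toy XOR-based scheme in Python (a simplified sketch, not a production design; real bit-splitting products also encrypt the data before splitting and again at each storage location):

```python
import secrets
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int = 3):
    """Split data into n same-length shares. The first n-1 shares are
    random; the last is the XOR of the data with all of them, so any
    subset of fewer than n shares reveals nothing about the data."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    shares.append(reduce(_xor, shares, data))
    return shares

def combine(shares):
    """XOR all shares together to recover the original data."""
    return reduce(_xor, shares)

# In a bit-splitting design, each share would be sent to a different
# storage location; all locations must be compromised to recover data.
shares = split(b"cardholder data", 3)
recovered = combine(shares)
```

This is the same property described above: one share, even fully retrieved, is indistinguishable from random bytes until it is combined with the shares held at the other locations.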
Data dispersion consists of information being distributed and stored in multiple cloud pods; it is a key component of cloud storage architecture. The ability to have data replicated throughout a distributed storage infrastructure is critical, as it allows a cloud service provider to offer storage services based on the level of the user's subscription or the popularity of the item.
In this chapter, we examined the advantages and disadvantages of virtualization and cloud computing as well as the issues that they bring to enterprise security.
Cloud and virtualized computing have become the way of the future. Advances in computing have brought about more changes than just virtualization, including network storage and cloud computing. Cloud computing changed the concept of traditional network boundaries by placing assets outside the organization's perimeter and control.
The idea of cloud computing represents a shift in thought as well as trust. The cloud service can be fully managed by the cloud provider, and consumers use the service at a rate that is set by their particular needs. Cost and ease of use are two great benefits of cloud computing, but you must consider significant security concerns when contemplating moving critical applications and sensitive data to public and shared cloud environments. To address these concerns, the cloud provider must develop sufficient controls to provide the same or a greater level of security than the organization would have if the cloud were not used. Organizations will continue to evolve their approach as new technologies become available.
Be able to explain virtualization strategies. After this chapter, you should be able to explain the difference between hypervisor types, containers, and emulation, choosing the best model for different situations.
Understand the different cloud models. The CASP+ professional should be able to explain the difference between the different cloud service models, hosting models, and the considerations of deployment including resources, protection, location, and cost.
Be able to explain cloud technology adoption and security. Understand how adoption of cloud technologies affects the entire organization from automation to encryption to logging and monitoring.
Be able to choose the correct backup and recovery methods. After this chapter, you should understand the role of the cloud in business continuity and disaster recovery, as well as the use of primary and alternative providers for backups.
You can find the answers in the Appendix.