Chapter 18: Forensics Readiness in Cloud Environments

Introduction

As discussed previously in this book, the concept of digital forensics readiness focuses primarily on reducing costs and minimizing business interruptions when performing investigations. In a cloud environment, achieving a state of digital forensics readiness is important because of the volatile nature in which cloud computing environments operate and the increased potential for service disruptions when gathering evidence during an investigation.

Through the combination of several major technology concepts, cloud computing has evolved over several decades to become the next generation of computing models. As cloud computing continues to mature, providing organizations with an inexpensive means of deploying computing resources, it is driving a fundamental change in the way technology is delivered as a common layer of service-oriented architectures.

Cloud computing presents unique challenges to an organization’s digital forensics capabilities because of the dynamic nature in which information exists and a shift where organizations have less control over physical infrastructure assets. This leads to the inherent challenge of maintaining best practices for cloud computing while continuing to enable digital forensics capabilities.

Brief History of Cloud Computing

When thinking of cloud computing, we commonly think of the historical milestones when ideas and solutions started to arise throughout the twenty-first century; however, cloud computing is not a new concept. The reality is that the concepts that eventually led to cloud computing have existed for several decades, building out an infrastructure path that eventually formalized the computing models.

Dating back to the 1950s, the fundamental concepts of cloud computing emerged with the introduction of mainframe systems. The gradual evolution towards cloud began when organizations started to prioritize the efficiency of their large-scale computing resources, allowing multiple users to simultaneously access a central computer system using terminals. Because technology was costly to buy and maintain at this time, providing shared access to a single resource was an economical solution that made sense for the organizations that used it.

Moving forward to the 1970s, the concept of virtual machines (VM) emerged, making it possible to execute one or more operating systems (OS) simultaneously on a single piece of physical hardware. This technological advancement was an important catalyst in taking shared computing to the next level and further evolving communication and information-sharing capabilities.

During the 1990s, the World Wide Web (WWW) exploded onto the scene, allowing Internet-based computing to really take off. Before the Internet, telecommunication providers could only offer single, dedicated point-to-point connections. With the introduction of virtual private network connections, instead of building out physical infrastructure for each connection, organizations could leverage shared access over the same physical infrastructure. At this time, cloud computing was in its infancy, enabling electronic business (eBusiness) such as online shopping, streaming content, and managing bank accounts.

Following the dot-com explosion in the early 2000s, several organizations played key roles in the further development of cloud computing services, as the availability of high-capacity networks and low-cost computing resources was introduced, together with pervasive adoption of virtualization and service-oriented architectures. During this time, cloud computing was maturing to a point at which it was providing expanded information technology (IT) as a service capability, for example, virtualized environments for storage and computing capacity.

Today, most cloud-based service attention focuses on enterprises using it as an alternative for sourcing technology resources and capacity. As cloud computing evolves into its next level of maturity, the concept of “everything as a service” will enable most enterprise infrastructures and applications to be sourced through on-demand service models.

What Is Cloud Computing?

The origin of the term “cloud” in relation to computing stems from the telecommunications world, where networks and the Internet were commonly depicted on diagrams as clouds. Generally, the use of a cloud in these diagrams signified areas where information was moving and being processed without anyone needing to know what was happening. This philosophy is still central to cloud computing today, where the customer requests and receives information and services without knowing where they reside or how they are transmitted.

Generally, cloud computing is a model for enabling convenient and on-demand delivery of computing resources (i.e., systems, storage, applications) over a network (i.e., Internet) that can be rapidly provisioned and released with minimal effort or interaction. From all advancements made throughout history, the major technology concepts that ultimately explain the evolution and creation of cloud computing are:

•  Grid computing to solve large problems using parallel computing systems

•  Utility computing to offer computing resources as a metered service

•  Software as a service (SaaS) to allow for network-based subscriptions to applications

•  Cloud computing to provide anywhere and anytime access to computing resources that are delivered dynamically as a service

Characteristics

Within all cloud computing infrastructures, there are five essential characteristics as follows:

•  Rapid elasticity: With cloud computing, it is challenging for cloud service providers (CSP) to anticipate usage volumes or demands. Therefore, cloud capabilities need to provide dynamic scalability, in some cases automatically, to rapidly meet the customer’s computing resourcing demands.

•  On-demand self-service: This offers the ability to utilize a self-service model via which consumers can automatically provision and release computing resources, such as systems or storage, as needed without requiring human interaction with the CSP.

•  Broad network access: In a society that is always connected from anywhere, cloud computing services need to be available over networks through standard interfaces that promote use by a wide variety of platforms (e.g., mobile devices, laptops).

•  Measured service: Cloud systems have metering capabilities that have a level of abstraction appropriate to the type of service (i.e., bandwidth, storage, users) so that resources can be monitored, controlled, and reported transparently.

•  Resource pooling: With cloud computing provided under a multi-tenant model, both physical and virtual resources are dynamically assigned and reassigned based on demand, without consumers having control or knowledge over where resources are located (i.e., country, data center).
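The “measured service” characteristic above can be illustrated with a small sketch. The resource names and unit rates below are hypothetical, meant only to show metering at an abstraction level appropriate to each resource type:

```python
# Hypothetical metered-service billing sketch: each resource type is
# sampled at its own level of abstraction and billed per unit consumed.
RATES = {
    "storage_gb_hours": 0.0001,  # hypothetical unit prices
    "bandwidth_gb": 0.01,
    "active_users": 0.05,
}

def metered_cost(usage: dict) -> float:
    """Sum the cost of the resources actually consumed in a billing period."""
    return round(sum(RATES[name] * amount for name, amount in usage.items()), 4)

print(metered_cost({"storage_gb_hours": 5000, "bandwidth_gb": 120}))  # 1.7
```

This transparency of monitored and reported usage is what allows consumers to pay only for what they actually consume.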

Service Models

Within all cloud computing infrastructures, there are three distinct service models as follows:

•  Software as a service (SaaS): Consumers are provided the capability to use the provider’s applications running within the cloud infrastructure. These applications are commonly accessible from a variety of client devices (e.g., web browser, program interface). Consumers do not manage or control the underlying cloud infrastructure (i.e., network, systems, storage, applications), except where user-specific application configurations are permitted.

•  Platform as a service (PaaS): Consumers are provided with capabilities to deploy, onto the cloud infrastructure, any applications they have created or acquired using the programming tools (i.e., languages, libraries, services) supported by the CSP. Consumers do not manage or control the underlying cloud infrastructure (i.e., network, systems, storage) but have control over deployed applications and the user-specific configurations.

•  Infrastructure as a service (IaaS): Consumers are provided with capabilities to provision and release computing resources (i.e., processing, storage, networks) where operating systems (OS) and applications can be used. Consumers do not manage or control the underlying cloud infrastructure (i.e., network, systems) but have control over the operating systems, applications, storage, and select network components (e.g., host-based firewalls).

Delivery Models

Within all cloud computing infrastructures, there are four types of deployment models as follows:

•  Private cloud: This model is provisioned exclusively for use by a single organization. It may exist either on or off the organization’s premises, where it can be owned, managed, and operated by the organization or by the CSP.

•  Community cloud: This model is provisioned for use by a specific community of consumers that have a shared interest (e.g., security requirements, compliance needs). It may exist either on or off the organization’s premises, where it can be owned, managed, and operated by the organization or by the CSP.

•  Public cloud: This model is provisioned for open use by the public. It exists exclusively within the CSP premises and can be owned, managed, and operated by either the CSP or another entity (i.e., organization, academic, managed service provider [MSP]).

•  Hybrid cloud: This model is provisioned as any combination of two or more other cloud models (i.e., private, community, public) bound together by technologies that enable data and application portability (i.e., load balancing).

Isolation Models

Within all cloud computing infrastructures, there are two types of isolation models as follows:

•  Dedicated: Where infrastructure is reserved, and isolated, for specific users or customers

•  Multi-tenant: Where infrastructure is shared amongst several groups of users or customers

Illustrated in Figure 18.1, the three model dimensions discussed above (service, deployment, and isolation) are complementary building blocks and form the basis on which cloud computing environments are created.


Figure 18.1 Cloud computing model dimensions.

Challenges with Cloud Environments

Cloud computing has transformed the ways in which electronically stored information (ESI) is stored, processed, and transmitted. From the information security perspective, corporate information stored in these services is, for the most part, beyond the boundaries of IT control and is increasingly vulnerable because the provider’s controls may (or may not) meet the organization’s security requirements (i.e., encryption, data residency, logical access).

Challenges often faced when conducting a forensics investigation in a cloud environment primarily revolve around the control of ESI, especially when it comes to gathering and processing it in a forensically sound manner. These broadly categorized technical, legal, and organizational challenges can impede or ultimately prevent the ability to conduct digital forensics. While cloud computing possesses similarities to its predecessor technologies, the introduction of this new model presents significant challenges when applying traditional digital forensics methodologies and techniques.

Outlined in the following sections, challenges within cloud computing environments cannot be solved solely based on technology, law, or organizational principles. Rather, overcoming these difficulties requires an approach combining technology, law, and organizational principles to develop mitigation strategies based on people, processes, and technology.

Mobility

Mobile devices—as a business tool—have changed the “where and how” aspects of the data-centric security approach when it comes to the storage of an organization’s informational assets. For example, informational assets that have been entrusted to an organization—by customers or employees—can be configured to synchronize across multiple devices, or other cloud-based services, which aggravates issues of data residency and increases the possibility of this ESI being compromised, lost, or stolen. Refer to Chapter 9, “Determine Collection Requirements,” for further discussion on the data-centric security methodology.

Hyper-Scaling

Generally, hyper-scale environments are distributed computing infrastructures where ESI volumes, and demand for certain types of processing, can increase exponentially and be accommodated quickly in a cost-effective manner. The virtual resources used are extremely short-lived and can also rely on container orchestration, discussed below, to manage thousands of instances. These “containers” are often associated with cloud computing because they help organizations become more efficient, use less power, and respond quickly to customer demands. The non-persistent and volatile nature of ESI within these environments can leave little, if any, digital evidence for gathering and processing.

Containerization

For the longest time, deploying software applications was as much an art as it was a systematic process. In contrast, the modern use of container orchestration to deploy software applications allows for standardization of the underlying computing environment and removes dependencies on operating system versions and hardware specifications. As the use of containers grew, there arose a need to manage fleets of containers much in the same way traditional computing systems are managed in data centers. The reality is that many existing digital forensics tools and processes are not aware of, or capable of analyzing, containers, so alternatives must be explored.
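Because many forensics tools cannot analyze containers directly, a practical alternative is to start from the orchestration metadata itself. As a minimal sketch, the function below pulls triage-relevant fields from a record shaped like the output of Docker’s `docker inspect` command; the sample record here is entirely hypothetical:

```python
def summarize_container(meta: dict) -> dict:
    """Extract forensically relevant fields from container inspection metadata."""
    return {
        "container_id": meta.get("Id"),
        "image": meta.get("Image"),
        "created": meta.get("Created"),
        "started_at": meta.get("State", {}).get("StartedAt"),
        # Host paths mounted into a container often outlive the container itself.
        "host_mounts": [m.get("Source") for m in meta.get("Mounts", [])],
    }

sample = {  # hypothetical record shaped like `docker inspect` output
    "Id": "3f4c9a0d17be",
    "Image": "sha256:ab12cd34",
    "Created": "2023-04-01T12:00:00Z",
    "State": {"StartedAt": "2023-04-01T12:00:05Z"},
    "Mounts": [{"Source": "/var/lib/app-data", "Destination": "/data"}],
}
print(summarize_container(sample)["host_mounts"])  # ['/var/lib/app-data']
```

Mounted host paths are often the most durable evidence a short-lived container leaves behind, which is why they are singled out here.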

First Responders

When responding to a security incident where infrastructure is not owned or directly managed by the organization, such as when cloud-based services are managed by a CSP, there is a need to rely on others to perform initial triage tasks and functions. The reality is that most organizations are often faced with concerns related to the competence and trustworthiness of incident first responders. While contractual service level objectives (SLO) and service level agreements (SLA) can be defined to ensure CSPs respond accordingly, a joint incident response plan needs to be developed with the CSPs to outline how to manage several types of security incidents.

Evidence Gathering and Processing

With cloud-based systems managed by CSPs, organizations do not have direct access to the technologies needed to gather and process evidence following traditional methodologies and techniques. As a result, collection and preservation of cloud-based evidence relevant to a specific organization’s investigation can be challenging where factors such as multi-tenancy, distributed resourcing (cross-border), or volatile data are present. Furthermore, organizations may encounter issues where correlation and reconstruction of events are not easily achieved because artifacts exist within multiple CSPs or across several virtual images.
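One concrete prerequisite for correlating events across CSPs is normalizing provider timestamps, which often arrive with different UTC offsets, into a single timeline. A minimal sketch, with hypothetical event data:

```python
from datetime import datetime, timezone

def build_timeline(events):
    """Merge (ISO-8601 timestamp, description) events from multiple
    providers into one chronologically ordered, UTC-normalized timeline."""
    parsed = [(datetime.fromisoformat(ts).astimezone(timezone.utc), desc)
              for ts, desc in events]
    return sorted(parsed)

timeline = build_timeline([
    ("2023-04-01T08:00:00-05:00", "csp-a: console login"),   # 13:00 UTC
    ("2023-04-01T12:30:00+00:00", "csp-b: object accessed"), # 12:30 UTC
])
print([desc for _, desc in timeline])  # ['csp-b: object accessed', 'csp-a: console login']
```

Only after this normalization can artifacts from different providers be reconstructed into a single sequence of events.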

Forensics Readiness Methodology

Following traditional methodologies, as illustrated in Figure 18.2 and discussed further in Chapter 2, “Investigative Process Methodology,” digital forensics investigations normally follow an approach whereby evidence is searched for (identified), seized (collected and preserved), and analyzed (processed). However, traditional investigative methodologies were not designed with cloud computing in mind and, given the dynamic and volatile nature of cloud computing, following this traditional approach to digital forensics investigations is not suitable.

As an alternative, organizations need to optimize their investigative process by taking proactive steps to ensure that evidence will be readily available if (and when) needed from their cloud computing environments. Throughout the sections below, each step outlined for implementing digital forensics readiness will be discussed with respect to improving investigative capabilities within cloud computing environments.

Step #1: Define Business Risk Scenarios

Digital forensics investigations in a cloud environment require organizations to follow a proactive approach whereby controls and measures have been implemented to guarantee digital evidence will be available when (and if) needed.

Whether within an organization’s control (i.e., internal network) or located in a CSP, the business risk scenarios where digital forensics readiness demonstrates positive benefits are similar and include:

1.  Reducing the impact of cybercrime

2.  Validating the impact of cybercrime

3.  Producing evidence to support organizational disciplinary issues

4.  Demonstrating compliance with regulatory or legal requirements

5.  Effectively managing the release of court-ordered data

6.  Supporting contractual and commercial agreements


Figure 18.2 High-level digital forensics process model.

Rather than assuming the use of a cloud environment will limit the span of business risk scenarios, it is recommended to ensure all six scenarios are included right from the outset of the cloud engagement. By doing so, organizations have established a wide scope of risk that allows them to be better positioned for focusing on specifics, rather than establishing a narrow scope and having to expand after identifying missed evidence.

Refer to Chapter 7, “Defining Business Risk Scenarios,” for further discussion on the six business risk scenarios applied to digital forensics readiness.

Step #2: Identify Potential Data Sources

Cloud computing presents a unique challenge because of the dynamic nature in which information exists as well as a shifting landscape in which organizations have less control over physical infrastructure assets. This leads to the inherent challenge of maintaining best practices for cloud computing while continuing to enable digital forensics capabilities.

Cloud computing has revolutionized the ways in which ESI is stored, processed, and transmitted. There are numerous challenges facing the digital forensics community when it comes to gathering and processing digital evidence in cloud computing environments. While cloud computing possesses similarities to its predecessor technologies, the introduction of this operating model presents challenges to digital forensics, as illustrated in Figure 18.1.

Virtualization is the foundational technology for cloud computing environments because it is a cost-effective means of quickly provisioning technology resources. For the most part, systems hosted in these virtual environments produce digital artifacts similar to those found in traditional computer systems with physical hardware. However, within these rapidly elastic virtual environments exists a networking backplane of system communications that does not travel beyond the physical host system where virtualization is being run.

Because of how this internal backplane operates, indicators that an attack is moving between virtualized systems are not going to be available in typical technology-generated log files. Where this type of internal communication exists, digital forensics practitioners need to remember that network communications between virtualized systems can be observed only by using network forensics tools and techniques directly on the physical host system.

As illustrated in Figure 18.3, virtualized systems have an underlying host environment (hardware and software) where digital evidence can be generated and collected. When a virtual system is involved in an incident, or an incident is discovered during an investigation, it is important that all data objects associated with the virtual systems be gathered from both host and guest systems. These data objects may include the following:


Figure 18.3 Virtualization architecture.

•  Virtual machine images, which are files that contain a guest operating system, file system, and data objects

•  Log files containing information such as virtual disk partitioning, virtual networking settings, or state configurations

•  Dump files from random access memory (RAM) or paging files
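Whichever of these data objects are gathered, hashing them at the moment of acquisition supports later integrity claims. A minimal sketch; the file name and contents below are stand-ins, not a real VM image:

```python
import hashlib
import tempfile
from pathlib import Path

def acquire(paths):
    """Build an acquisition manifest mapping each data object to its SHA-256."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

# Demo on a stand-in for a guest VM image (hypothetical content).
with tempfile.TemporaryDirectory() as workdir:
    image = Path(workdir) / "guest-vm.img"
    image.write_bytes(b"...raw disk bytes...")
    manifest = acquire([image])
    print(manifest[str(image)][:16])  # first 16 hex characters of the digest
```

In practice the same manifest would cover VM images, log files, and memory dumps alike, recorded alongside the chain of custody.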

Refer to Chapter 8, “Identify Potential Data Sources,” for further discussion on creating an inventory of digital evidence data sources as applies to digital forensics readiness.

Step #3: Determine Collection Requirements

In cloud environments, gathering and processing evidence is not as straightforward as it is with ESI located within an organization’s traditional computer systems and technologies. For example, a common challenge with cloud computing is that, in most cases, physical access to the hardware running cloud instances is unfeasible, making traditional search and seizure all but impossible. Alternatively, organizations can proactively collect ESI from cloud environments that could be required during a digital forensics investigation.

Where direct access to the hardware is not permitted, gathering and processing digital evidence still needs to follow traditional forensics principles, methodologies, and techniques. As a result, collection and preservation of cloud-based evidence relevant to a specific organization’s investigation can be challenging where factors such as multi-tenancy, distributed resourcing (cross-border), or volatile data are present. This can be an overwhelming task to undertake, as cloud computing environments are quite dynamic in nature and the rate at which ESI changes cannot be matched by manual processes.

Understandably, developing enterprise strategies for cloud computing is specific to the business profile and use cases of each organization and ultimately should be done following a risk-based methodology so that informational assets are not unknowingly or accidentally exposed to unauthorized parties. While the extent to which cloud strategies should be developed is beyond the scope of this book, the following are necessary for determining evidence collection requirements in cloud computing environments.

Enterprise Management Strategies

Enterprise management strategies prepare the underlying infrastructure to holistically and practically support digital forensics capabilities; however, every variation of cloud model being used will have several types and forms of ESI that can be used as digital evidence. A key factor in enabling any digital forensics capability within an organization is to ensure that a comprehensive approach to designing the architectural and technical models that make up cloud environments is taken, applying complementary administrative, technical, and physical controls that are given equal treatment.

Developing enterprise strategies for cloud computing is specific to each organization and its respective business requirements, and should be done following a risk-based methodology so that informational assets are not unknowingly or accidentally exposed to unauthorized parties. For example, the following are two significant components for enabling cloud computing as it pertains to enhancing digital forensics capabilities.

Cloud Computing Governance

Implementing technology to secure cloud computing, as a precursor to enabling digital forensics capabilities, is only one piece of an organization’s broader strategy to govern use of and access to these technologies. Before digital forensics capabilities can be realized, there needs to be documentation approved that establishes the requirements for using cloud-based services to secure data storage and access, as well as what is considered acceptable and unacceptable conduct. Combined with the documentation created through the organization’s information security governance framework, standard operating procedures (SOP) are the backbone for performing digital forensics within cloud computing environments.

Within the information security governance framework, there needs to be a series of documents that specifically addresses the use of and access to the cloud computing environment with respect to the organization’s data. These documents provide the organization with a foundation for planning the eventual enablement of cloud computing capabilities, guidelines for user behavior and conduct, as well as a driver for enabling digital forensics capabilities.

For example, the following corporate governance documents need to be implemented before cloud computing is enabled:

•  A code of conduct is a high-level governance document that sets out the organization’s values, responsibilities, and ethical obligations. This governance document provides the organization with guidance for handling different situations related to employee behaviors and actions.

•  An acceptable use policy is designed to govern the use of cloud computing environments so that employees know what the organization considers to be acceptable and unacceptable behavior and activity.

Security and Configuration Standards

Largely, security controls within cloud computing environments are no different than those found within traditional IT environments. However, there is a difference in the models that cloud computing employs, which may present slightly different risk profiles than traditional IT environments have.

With cloud computing environments, the scope of security responsibilities for the CSP and consumer differ based on the models used. Understanding the difference in how security controls are deployed between cloud service models is critical for organizations as they seek to manage the business risk of using cloud computing environments, for example:

•  In SaaS environments, the scope of security controls (such as SLOs, privacy, and compliance) is negotiated as part of the terms and conditions outlined in formal contractual agreements.

•  In IaaS environments, CSPs are responsible for implementing security controls for the underlying infrastructure and abstraction layers, while the consumer is responsible for the remainder of the stack (i.e., OS, applications, etc.).

•  In PaaS environments, securing the platform is the responsibility of the CSP, while securing applications (either developed or purchased) belongs to the consumer.
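The division of responsibility above can be captured as a simple lookup table, one way organizations track who must implement a control for a given model and layer. This mapping is an illustrative simplification, not a contractual matrix:

```python
# Illustrative shared-responsibility lookup (simplified; real contracts vary).
RESPONSIBILITY = {
    ("iaas", "infrastructure"): "csp",
    ("iaas", "os"): "consumer",
    ("iaas", "application"): "consumer",
    ("paas", "infrastructure"): "csp",
    ("paas", "platform"): "csp",
    ("paas", "application"): "consumer",
    ("saas", "infrastructure"): "csp",
    ("saas", "application"): "csp",  # control scope negotiated contractually
}

def who_secures(model: str, layer: str) -> str:
    """Return which party is responsible for securing a layer in a model."""
    return RESPONSIBILITY[(model.lower(), layer.lower())]

print(who_secures("PaaS", "application"))  # consumer
```

Encoding the split this way makes gaps visible early, before they surface during an incident.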

Reference Architectures

Knowing that there are different methods, services, and responsibilities for securing different cloud computing models, organizations are faced with significant challenges when it comes to properly assessing risk and determining the level of security controls needed to protect their informational assets.

Reference architectures (RA) provide a comprehensive and formal means to overlay security within cloud infrastructures. Using an RA to secure cloud-based services lays out a risk-based approach for organizations to determine CSP responsibilities for implementing specific controls throughout a cloud ecosystem. Generally, the RA framework provides a high-level summary to:

•  Identify the core security components that can be implemented in a cloud ecosystem;

•  Provide the core set of security components, based on deployment and service model, that are within the responsibility of the CSP; and

•  Define the formal architectures that add security-centric layers to cloud computing environments.

Illustrated in Figure 18.4, there are multiple layers of interactions found throughout enterprise technology stacks where security controls can be deployed and implemented. As modern technologies—such as mobile devices, virtualization, and cloud computing—continue to proliferate as tools for conducting business, organizations are increasingly faced with the need to expose their business records and applications beyond the borders of their traditional network perimeter.

Addressing security requirements in cloud computing environments should follow the traditional risk-based approach that focuses on agnostic controls—that can be applied to most systems or software development methodologies—to reduce their attack surface. Alternatively, instead of managing the security of cloud-based solutions through specific technology components, organizations should manage their attack surfaces using security control families based on the type of cloud models deployed. Examples of security control families relevant to cloud computing environments include:


Figure 18.4 Security controls layers.

•  Access controls

•  Awareness and training

•  Audit and accountability

•  Security assessment and authorization

•  Configuration management

•  Identification and authentication

•  Incident response

•  Media protection

•  Physical and environmental protection

•  Risk assessment

•  System and information integrity

With cloud computing environments based on a standardized multi-tier architecture, security control families should be implemented based on the security concerns found throughout each layer. Generally, there are four technology solution domains within cloud-based solutions that describe security concerns and can be used to map security control families appropriately:

•  Presentation services is the interface between end users and the cloud-based solution. The requirements for security controls within this domain will vary given the type of cloud service model provided (i.e., PaaS, IaaS, or SaaS) and the interface with the user (i.e., mobile device, web site, etc.). Examples of security control families within this domain include access controls, and identification and authentication.

•  Application services are the rules and processes behind the presentation services that interact with and manipulate information on behalf of end users. The requirement for security controls within this domain is to ensure that the development processes used to build services within this tier maintain the integrity of the information. Examples of security control families within this domain include configuration management, and system and information integrity.

•  Information services prioritize, simplify, and manage the risk associated with the storage of information. The requirements for security controls within this domain are to properly manage the extraction, transformation, normalization, and loading of information within the technology solution. Examples of security control families within this domain include risk assessment, and system and information integrity.

•  Infrastructure services provide the core technology capabilities required to support the higher-level tiers of a cloud-based solution architecture. The requirement for security controls within this domain is to provide physical security capabilities that match the risk characteristics found throughout the higher-level cloud technology solution domains. Examples of security control families within this domain include media protection, and physical and environmental protection.

Providing a detailed mapping of security control families to the cloud technology solution domain is beyond the scope of this book. Alternatively, the following security reference architectures can be used as guidance for securing cloud computing environments and have also been provided in the Resources chapter at the end of this book:

•  NIST Cloud Computing Security Reference Architecture contains detailed guidance for organizations to adopt best practices and security requirements for cloud service contracts, SLOs, SLAs, and deployment of cloud computing environments.

•  Trusted Cloud Initiative (TCI) reference architecture provides a comprehensive approach to securing identity-aware cloud ecosystems that combines best-of-breed architecture models (e.g., Information Technology Infrastructure Library [ITIL]).

Refer to Chapter 5, “Digital Forensics as a Business,” for further discussion about policies, processes, procedures, and how an organization’s governance framework complements digital forensics.

Refer to Chapter 9, “Determine Collection Requirements,” for further discussion on the administrative, technical, and physical controls requirements for gathering relevant and meaningful digital evidence.

Step #4: Establish Legal Admissibility

As stated previously, some cloud deployments don’t allow customers to have direct access to backend systems and infrastructure. For example, every component of the virtualization architecture (illustrated in Figure 18.3) that the organization does not own or manage introduces various levels of uncertainty around the integrity and authenticity of evidence not within its span of control.

Layers of Trust

Before determining what can be used as an evidential data source, it is important to first understand the layers of trust within cloud computing environments. A rule of thumb of digital forensics is to assume that every investigation will end up in a court of law. And while a judge or jury will ultimately decide whether presented evidence is admissible and will be accepted, there can be varying degrees of confidence about whether cloud-based evidence is accurate and reliable.

For example, in traditional computer forensics where a hard drive has been removed from a standalone computer system for imaging, digital forensics practitioners must trust that their forensics hardware and software are operating as expected; refer to Appendix A, “Tool and Equipment Validation Program,” for further discussion about validation and verification of forensics tools. However, where a computer system exists within a cloud environment, there are new layers of trust introduced that digital forensics practitioners need to consider when collecting digital evidence.

Generally, there are six layers of trust within cloud environments where the techniques used to gather digital evidence will differ (see Table 18.1). Working down through the architectural layers of a cloud environment, there are distinct levels of trust in whether the information within each layer is secure and trustworthy. Ultimately, this means that if there are concerns about the integrity of information at any layer, the courts can render a legal decision not to admit the evidence. As a strategy for addressing these issues, digital forensics practitioners should follow scientifically proven and documented techniques to verify and validate the integrity and authenticity of evidence.

Depending on the nature of the investigation, organizations need to determine which layer of evidence outlined above needs to be collected and preserved. Ultimately, making this assessment involves two key decision criteria: the first being the organization’s technical capability to forensically gather evidence at that layer, and the second being the level of trust in the data at that layer. Where cloud environments are located within the organization’s boundaries of control, technical capability and level of trust will be relatively higher because network forensics tools and techniques can be used to gather evidence from known and managed infrastructures.

However, when there is no physical infrastructure present within the enterprise, digital forensics practitioners need to turn to their suite of enterprise security controls to gather network traffic data relevant to the incident or investigation. Doing so requires that a well-defined SOP be in place to gather the maximum amount of evidence possible, while causing minimal impact to the business, that maintains both forensics viability and legal admissibility. Chain of custody must be strictly enforced to guarantee there is no potential for the integrity or authenticity of this data to be questioned.

Table 18.1 Cloud Layers of Trust

Cloud Layer             Acquisition Technique            Trust Level
(6) Applications/Data   Subjective to application/data   Guest OS, Hypervisor, Host OS, Hardware, Network
(5) Guest OS            Digital Forensics Tools          Guest OS, Hypervisor, Host OS, Hardware, Network
(4) Virtualization      Introspection                    Hypervisor, Host OS, Hardware, Network
(3) Host OS             Access to Virtual Disk           Host OS, Hardware, Network
(2) Physical Hardware   Access to Physical Disk          Hardware, Network
(1) Network             Network Forensics Tools          Network

Gathering evidence that is located beyond the organization’s network perimeter is the point where contractual agreements with CSPs are factored in to gather evidence. Where CSPs are involved with service offerings, access to evidence might be limited because organizations might have little to no control over the physical infrastructure involved in the incident or investigation.

For the most part, CSPs do not provide customers with the options or interfaces necessary to gather evidence from these cloud environments, leaving organizations with no option but to collect evidence at a high level of abstraction. Given that most cloud ecosystems are implemented using virtualization technologies, the most common form of evidence gathered from cloud environments is an object or container, such as a virtual hard drive image.

Where concerns about trust do exist within cloud-based systems, it is important that the system’s architecture is designed in a way that increases the organization’s forensics capabilities and minimizes potential concerns about evidence integrity and authenticity. Like the multiple layers of interaction found throughout enterprise technology stacks where security controls can be deployed and implemented, as illustrated in Figure 18.3, these security controls can also be leveraged to enhance forensics capabilities in cloud environments by placing controls closer to the actual data as a means of protection.

Refer to Chapter 3, “Digital Evidence Management,” for further discussion about data-centric security.

Refer to Chapter 10, “Establishing Legal Admissibility,” for further discussion about strategies for establishing and maintaining legal admissibility.

Step #5: Establish Secure Storage and Handling

Generally, cloud computing environments are extremely volatile by design, largely due to their dynamic nature and varying levels of trust. Considering the layers of trust illustrated previously in Table 18.1, there can be concerns about the integrity of information at any layer, requiring organizations to implement strategies for addressing these potential issues. Because of this, unlike traditional computing environments, all ESI that would be deemed relevant and meaningful as potential digital evidence needs to have an elevated degree of security controls implemented to safeguard its integrity and authenticity.

For example, hyper-scaling introduces a layer of complexity where data can be accessed and transmitted across virtual resources that are often only available for short-lived periods and are distributed across dynamic infrastructures. Given the potentially volatile conditions of cloud-based systems, it is important that all ESI serving as relevant digital evidence be secured while in use, in transit, and at rest. At a minimum, the following control mechanisms should be implemented (where possible) to guarantee the secure storage and handling of potential evidence in a cloud environment:

•  Real-time logging of technology-generated (e.g., log files) and technology-stored (e.g., word processing documents) data to a remote and centralized repository

•  One-way cryptographic hash algorithms, such as the Message Digest Algorithm family (e.g., MD5, MD6) or the Secure Hashing Algorithm family (e.g., SHA-1, SHA-2, SHA-3), to establish and maintain both the integrity and authenticity of ESI
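The hashing control above can be sketched in a few lines. The following is a minimal illustration (not a prescribed implementation) of computing a one-way hash over an evidence file at collection time so that integrity can be re-verified later; the function name and chunked-read approach are the author's own choices for the example:

```python
import hashlib

def hash_evidence(path: str, algorithm: str = "sha256", chunk_size: int = 65536) -> str:
    """Compute a one-way cryptographic hash of an evidence file.

    Hashing at collection time, and again after any transfer, allows the
    integrity and authenticity of the ESI to be demonstrated later.
    """
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in chunks so large items (e.g., virtual disk images) fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

In practice the hash value would be recorded in the chain of custody documentation at acquisition and recomputed at each subsequent handling step to demonstrate the evidence has not changed.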

Refer to Chapter 3, “Digital Evidence Management,” for further discussion about data-centric security, technology-generated data, and technology-stored data.

Refer to Chapter 11, “Establish Secure Storage and Handling,” for further discussion about strategies for establishing and maintaining evidence integrity during handling and storage.

Step #6: Enable Targeted Monitoring

Traditionally, digital evidence was primarily gathered from computer systems such as desktops, laptops, and servers. However, with the widespread use of technology in business operations, every organization will have ESI that is considered potential digital evidence generated across various sources, including cloud infrastructures.

With cloud computing environments, evidence can be located across many distributed systems and devices where, for the most part, access will be outside the organization’s scope of control. Knowing that cloud forensics is a sub-discipline of network forensics, which largely involves post-incident analysis of systems and devices, it is important that network-based evidence sources are included within the scope of investigations involving cloud environments.

Because of this, when conducting a forensics investigation where cloud systems are within scope, organizations must consider that, to establish facts and conclusions, they will need to gather and process digital evidence from network-based data sources that cannot be seized without generating some type of business outage or disruption. Background evidence is any ESI that has been created as part of normal business operations and that is used during a cloud forensics investigation. Examples of this type of evidence include:

•  Network devices such as routers, switches, or firewalls;

•  Application programming interface (API) service calls between cloud systems and applications;

•  Internal calls within the virtual machine (VM) system; or

•  Audit information such as system, application/software, or security logs.
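Because background evidence of these kinds accumulates continuously, a common first processing step is to narrow it to the incident window. The sketch below assumes a hypothetical log format where each entry begins with an ISO 8601 timestamp; real audit sources vary by device and CSP, so this is illustrative only:

```python
from datetime import datetime

def filter_by_window(log_lines, start, end):
    """Keep only log entries whose timestamp falls within the incident window.

    Assumes each line begins with an ISO 8601 timestamp followed by the
    event description (a hypothetical format for illustration).
    """
    relevant = []
    for line in log_lines:
        try:
            ts = datetime.fromisoformat(line.split(" ", 1)[0])
        except ValueError:
            continue  # skip malformed lines rather than fail the collection
        if start <= ts <= end:
            relevant.append(line)
    return relevant
```

Filtering to the window of interest reduces the volume of background evidence that must be preserved and processed without discarding the originals, which remain intact as the authoritative source.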

Refer to Chapter 3, “Digital Evidence Management,” for further discussion about common sources of evidence.

Step #7: Map Investigative Workflows

Gathering and processing digital evidence from cloud environments differs from the traditional approaches of digital forensics where organizations do not, in some cases, own the infrastructural components. With cloud computing based on requirements for broad network access, the application of cloud forensics is therefore a subset of and adaptation of network forensics principles, methodologies, and techniques.

However, knowing that there are potential limitations in conducting cloud forensics, it is necessary to implement guidelines and processes by which the digital forensics practitioner can gather and process potential digital evidence. The high-level digital forensics process model, illustrated previously in Figure 18.2, will be applied to the activities and tasks involved in conducting cloud forensics.

Phase #1: Preparation

As discussed in Chapter 2, “Investigative Process Methodology,” the activities and tasks performed in this first phase are essential in successfully executing all subsequent phases of the investigative workflow. As a component of the preparation phase, organizations can proactively align their people, processes, and technologies to support their cloud forensics capabilities.

Processes and Procedures With cloud forensics being a sub-discipline of network forensics, which is a sub-discipline of digital forensics, the existing baseline of standards, guidelines, and techniques discussed in Chapter 5, “Digital Forensics as a Business,” become the foundation for documentation specific to cloud forensics.

For the most part, the standard operating procedures (SOP) created for digital forensics still apply to cloud forensics. However, given that cloud infrastructure may not be owned by the organization, there is a need to develop specific SOPs so that digital forensics practitioners know how to engage CSPs to facilitate gathering evidence.

Education, Training, and Awareness As with digital forensics, an individual’s role with respect to cloud forensics determines the level of knowledge they are provided. Further discussion about the different levels of education, training, and awareness is found in the section below.

Technology and Toolsets Within the dedicated forensics lab environment, discussed in Chapter 5, “Digital Forensics as a Business,” organizations will need to acquire specific software and hardware to support their cloud forensics capabilities. However, the extent to which an organization invests in such a “toolkit” is dependent on their business environment and the degree to which they need to gather and process digital evidence from cloud environments.

As a subset of network forensics, tools and techniques used for cloud forensics will include a suite of network monitoring and collection utilities that allow digital forensics practitioners to replay and analyze traffic patterns. It is important that all tools used can process large datasets, given the potential volume of traffic on any given network segment, and subsequently pinpoint—with accuracy—where each piece of information was derived from.
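As one small illustration of validating a capture before processing it, the sketch below reads the global header of a libpcap-format file using only the standard library. This is an assumption-laden example (the function name and returned fields are the author's own), not a substitute for a full network forensics toolset:

```python
import struct

PCAP_MAGIC_BE = 0xA1B2C3D4  # standard magic number, microsecond timestamps
PCAP_MAGIC_LE = 0xD4C3B2A1  # byte-swapped variant written by little-endian hosts

def read_pcap_header(path):
    """Read and validate the 24-byte libpcap global header.

    Verifying a capture file's format and parameters before analysis helps
    pinpoint, with accuracy, where each piece of information derived from.
    """
    with open(path, "rb") as f:
        raw = f.read(24)
    if len(raw) < 24:
        raise ValueError("truncated capture file")
    magic = struct.unpack(">I", raw[:4])[0]
    if magic == PCAP_MAGIC_BE:
        endian = ">"
    elif magic == PCAP_MAGIC_LE:
        endian = "<"
    else:
        raise ValueError("not a libpcap capture file")
    major, minor, _tz, _sigfigs, snaplen, linktype = struct.unpack(endian + "HHiIII", raw[4:])
    return {"version": (major, minor), "snaplen": snaplen, "linktype": linktype}
```

A low snaplen value, for example, would tell the practitioner up front that packets were truncated at capture time, which bounds what conclusions the dataset can support.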

Further discussion about digital forensics tools and technologies can be found in Chapter 5, “Digital Forensics as a Business.”

Phase #2: Gathering

As discussed in Chapter 2, “Investigative Process Methodology,” this second phase of the investigative workflow consists of the activities and tasks involved in the identification, collection, and preservation of digital evidence. The same requirements for establishing the integrity, authenticity, and legal admissibility of digital evidence applies for cloud computing. However, given the predominant use of virtualization for cloud-based services and that an investigation might also involve multiple entities (the organization and the CSP), there are additional activities and tasks that need to be performed for cloud forensics.

Identification Largely, the activities and tasks performed here are no different than those discussed in Chapter 2, “Investigative Process Methodology.” Regardless of the evidence that has been identified, both physical and logical, digital forensics practitioners must follow consistent and repeatable methodologies and techniques to secure, document, and search a crime scene. Sample templates that can be used when securing, documenting, and searching crime scenes have been provided in the Templates section of this book.

Where cloud environments have been identified as relevant to an investigation, and some component of the cloud-based service is being provided by a CSP, the scope of an investigation widens significantly to include identification of evidence located in sources that are not directly owned and managed by the organization. Further complicating the scope of an investigation, although organizations have a contractual agreement in place with their direct CSP, most cloud applications have dependencies on other CSPs that also need to be considered.

To establish cloud forensics capabilities that ensure digital evidence will maintain legal admissibility, each CSP must be equipped with educated and experienced resources (people) to assist in all forensics activities, as discussed in the Phase #1: Preparation section above. Establishing where potential digital evidence exists involves working through the order of volatility, discussed in Chapter 2, “Investigative Process Methodology,” subjective to the technology infrastructure involved in the incident or investigation.

Collection and Preservation Referring back to the layers of trust illustrated in Table 18.1, organizations need to determine the order in which the different layers of evidence need to be collected and preserved. Ultimately, making this assessment involves two key decision criteria, the first being the organization’s technical capability to forensically gather evidence at that layer and the second being the level of trust in the data at that layer.

When gathering evidence that is located beyond the organization’s network perimeter, contractual agreements with CSPs are factored in to gather evidence. Where CSPs are involved with service offerings, access to evidence might be limited because organizations might have little to no control over the physical infrastructure involved in the incident or investigation.

Phase #3: Processing

As a result of network forensics activities, there will most likely be various datasets from the different network forensics tools that can prove valuable to the investigation. As disparate evidence, processing logs can be challenging because, on their own, these datasets cannot be used to establish factual conclusions. This means that all gathered evidence needs to be aggregated into a single dataset so the investigation team can better correlate events and establish a chronology, ensuring that relevant and meaningful evidence is not lost, skipped, or misunderstood.
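The aggregation step described above can be sketched simply: merge the per-tool datasets and order them chronologically. The tuple layout (timestamp, origin, description) is the author's illustrative choice, not a standard evidence schema:

```python
def build_timeline(*sources):
    """Aggregate events from disparate tools into one chronological dataset.

    Each source is an iterable of (timestamp, origin, description) tuples;
    merging them lets the investigation team correlate events that no
    single dataset could establish on its own.
    """
    merged = [event for source in sources for event in source]
    merged.sort(key=lambda event: event[0])  # chronological order
    return merged
```

Retaining the origin field for each event preserves the link back to the source dataset, which matters when the provenance of a particular entry is later challenged.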

Beyond analyzing the network forensics datasets, the tools and equipment used to process virtual hard drive images are for the most part the same as those used for traditional digital forensics, with the exception that there will most likely be only logical evidence, nothing physical (e.g., hard drive). At this stage of the investigation, the traditional methodologies and techniques used to analyze and examine digital evidence should follow SOPs to ensure consistent and repeatable processes are being followed to establish fact-based conclusions.

Phase #4: Presentation

As discussed in Chapter 2, “Investigative Process Methodology,” documentation is a critical element of every investigation and needs to start at the beginning of the investigation and carry on to the end. In this last phase of the investigative workflow, the final investigative report will be created to communicate factual conclusions by demonstrating the processes, techniques, tools, equipment, and interactions used to maintain the authenticity, reliability, and trustworthiness of digital evidence.

Refer to the following section for further discussion about maintaining evidence-based presentation and reporting.

Step #8: Establish Continuing Education

As with digital forensics, an individual’s role with respect to cloud forensics determines the level of knowledge they are provided. Detailed discussion about the diverse levels of education, training, and awareness an organization should require of their people in support of digital forensics can be found in Chapter 14, “Establish Continuing Education.”

General Awareness

As the lowest type of education, this is a generalized level of training and awareness that is designed to provide people with foundational knowledge without getting too deep into cloud computing or cloud forensics. Leveraging the education and training that has already been put in place for digital forensics, this education provides people with the competencies they need about organizational policies, standards, and guidelines so that they indirectly contribute, through some form of behavior or action, to the organization’s digital forensics program.

Examples of topics and subjects that should be included as part of a cloud forensics awareness program include the following:

•  Business code of conduct

•  Cloud computing acceptable use policy

•  Data protection and privacy

Basic Training

Essentially, the difference between this training and the previous awareness level is that the knowledge gained here is intended to teach people the skills necessary to directly support the organization’s digital forensics program as it relates to how, where, and to what extent cloud services are used for business purposes.

Information communicated at this level is more detailed than the previous type of education because it must provide people with the knowledge required to support a specific role or function, such as managing cloud computing ecosystems.

For example, as part of basic cloud forensics training, information about audit logging and retention should be covered. Generally, this topic relates to the practice of recording events and preserving them, as per the organizational governance framework, to facilitate digital forensics investigations.

Formal Education

A working and practical knowledge of cloud forensics requires people to first and foremost understand network forensics principles, methodologies, and techniques as a sub-discipline of digital forensics. Once this fundamental knowledge is gained, practitioners can then start pursuing knowledge of cloud computing and work towards a specialization in cloud forensics.

However, unlike digital forensics education programs, the availability of curriculum dedicated entirely to cloud forensics is still limited. Most commonly, cloud forensics is taught as a specific course at higher or post-secondary institutions, or as a professional education module led by an industry-recognized training institute.

Refer to Appendix B, “Education and Professional Certifications,” for a list of higher/post-secondary institutions that offer formal education programs.

Step #9: Maintain Evidence-Based Presentations

Whether hosted internally or with a CSP, the systems and ESI present can be used to commit or be the target of criminal activity. However, perhaps the biggest challenge to a digital forensics investigation where cloud environments are within scope is to determine the “who, where, what, when, why, and how” of cloud-based criminal activity. Some things to consider when writing a final investigative report include:

•  Structure and layout should flow naturally and logically, much as we speak.

•  Content should be clear and concise to accurately demonstrate a chronology of events.

•  Use of jargon, slang, and technical terminology should be limited or avoided. Where used, a glossary should be included to define terms in natural language.

•  Where acronyms and abbreviations are used, they must be written out in full on first use.

•  Because final reports are written after the fact, that is, after an investigation, content should be communicated in the past tense; but the tense can change where conclusions or recommendations are being made.

•  Format the final report not only for distribution within the organization, but also with the mindset that it may be used as testimony in a court of law.

A template for creating written formal reports has been provided as a reference in the Templates section of this book.

Step #10: Ensure Legal Review

The growth of cloud computing has heightened concerns about who has custody over data and where it is located. Before making a strategic decision to move business operations into a cloud computing environment, it is important to answer the question “What data residency concerns do I need to address?” In many countries, there are strict laws and regulations around data residency that prescribe the extent to which data can be stored in other geographical locations.

With this increased utilization of cloud environments, CSPs are opening facilities across multiple regions and countries where several laws and regulations govern the use, transmission, and storage of different types of ESI. For example, the General Data Protection Regulation (GDPR) of the European Union (EU), which replaced EU Directive 95/46/EC, was issued to strengthen and unify data protection requirements by giving EU citizens back control of their personal data. Likewise, Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) sets out rules for how private-sector organizations collect, use, and disclose personal information—of customers and employees—as part of their business activities.

Organizations are constantly faced with concerns of data residency and the ways in which the geographically distributed infrastructures may violate various laws and regulations. Where a legal or regulatory violation of data residency has occurred, whether done intentionally or accidentally, the consequences organizations can face include:

•  Financial penalties as result of legal or regulatory fines, compensation to victims, or the cost of remedying the violation

•  Legal ramifications of lawsuits by those whose data was in violation, or law enforcement—and governing—agencies

•  Operational impact due to loss of reputation, customer (client) base, or right to conduct business in certain geographical regions

The reality is that if the data can be accessed, and the organization demonstrates control over it, local jurisdictions will most likely demand that the data be produced as evidence even if it is stored in another jurisdiction. As a means of mitigating this, and helping guarantee that data is secure in cloud environments, organizations are implementing data-at-rest encryption following a bring-your-own-key (also referred to as bring-your-own-encryption) approach.
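The bring-your-own-key approach can be illustrated with a short sketch. This example assumes the third-party Python cryptography package is available and uses its Fernet recipe purely for illustration; the function names are hypothetical, and a production deployment would involve a key management service rather than a bare key:

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

def encrypt_before_upload(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt data client-side before it leaves the organization.

    Under a bring-your-own-key model the key is generated and retained by
    the organization, so the CSP only ever stores ciphertext.
    """
    return Fernet(key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    """Recover the plaintext using the organization-held key."""
    return Fernet(key).decrypt(ciphertext)
```

Because the CSP never holds the key, a demand served on the provider alone yields only ciphertext, which keeps the question of production squarely with the organization and its own jurisdiction.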

Contractual Agreements

When data is transferred to a cloud-based service, the responsibility for protecting and securing the data against loss, damage, or misuse commonly remains the responsibility of the data custodian, that is, the organization. In deployments where the organization relies on a CSP to host or process its data, it is essential (and in most cases legally required) that a written legal agreement be drafted to ensure all parties involved in the cloud-based service offerings will fulfill their responsibilities.

The cornerstone of enabling cloud computing within any enterprise is having a master service agreement (MSA) in place to function as the legal framework under which all parties will operate throughout the course of their relationship. This MSA must contain clauses whereby due diligence (before execution) is defined and continuous audits (during execution) are performed. In addition, it should contain terms and conditions, including the:

•  Objective for having the MSA in place;

•  Duration for which the MSA will govern the relationship;

•  Reason(s) for which termination of the contract can occur—and subsequently the consequences for all parties involved;

•  Structure and system of governance that will be applied—such as monitoring the service (i.e., SLO, SLA) or the rights and responsibilities of all parties involved; and

•  Requirements for supplying, managing, and reporting administrative, technical, and (where feasible) physical security control implementations

Entering a legal agreement with another party should not be done blindly. This means that organizations need to demonstrate due diligence in assessing their business practices, needs, and restrictions so that they have a clear and concise understanding of what is required of them—such as from a compliance standpoint—or what (legal) barriers they may encounter. In some cases, due diligence on the CSP may be necessary to determine whether the provider is fully capable of fulfilling its continued obligations outlined in the agreement.

Most commonly, a formal and complex contract agreement, tailored to meet specific requirements, is negotiated between an organization and the CSP. However, where CSPs provide organizations with a “click-wrap agreement,” careful assessment of risks against benefits needs to be completed to ensure that the provisions of the contract meet the needs and obligations of all parties throughout its lifecycle. If a contractual agreement cannot address those needs and obligations, organizations must consider alternatives, such as seeking out a CSP who is willing to enter a mutually agreeable contract relationship, rather than willingly accepting the faults of the potential relationship.

Refer to Chapter 16, “Ensuring Legal Review,” for further discussion about laws, standards, and regulations.

Summary

Cloud computing introduces a unique set of challenges to the digital forensics community because of the shift away from traditional technology architectures, and organizations now have less control over physical infrastructure assets. As a subset of network forensics, and ultimately of digital forensics, organizations must address these concerns head-on by understanding and identifying how (and where) their digital forensics capabilities must adapt to support cloud-based service offerings.
