Chapter 12

Enabling Targeted Monitoring

Introduction

Until this point, emphasis has been placed on the requirement for organizations to gather digital evidence, in support of the major business risk scenarios, in a manner that ensures it will be admissible in a court of law. In addition to gathering digital evidence for later use in legal proceedings, the aggregation of data sources can also be used to enhance monitoring capabilities so that potential threats are detected more effectively and in a timely manner.

This step is not about gathering data for its own sake. Its purpose is to ensure that the additional data sources being collected can be used effectively to detect potential threats. However, what constitutes a potential threat is subjective to each organization, because each has its own risk tolerance level, which raises the question, "At what point should we be suspicious?"

What Is (Un)acceptable Activity?

Through the creation of governance documents, such as policies and standards, organizations will define what they consider acceptable and unacceptable activity to be within their business environment. Generally, acceptable activity includes any communication that is within the defined boundaries as stated in the organization’s governance documentation. As an example, using secure email solutions for the transmission of customer information is within the boundaries of acceptable activity.

On the other hand, unacceptable activity includes any communication that is specifically prohibited outside of the defined boundaries as stated in the organization’s governance documentation, such as policy violations, potentially harmful behavior, or breach of confidentiality. Essentially, unacceptable activity is any activity that is not within the confines of what the organization has defined as acceptable. As an example, uploading customer information to cloud storage is unacceptable activity.

To facilitate the monitoring of, and alerting on, activity, organizations should explicitly define in their governance documentation what they deem to be acceptable activity so that everyone clearly understands which actions are acceptable and which are not.

Digital Forensics in Enterprise Security

As discussed in Chapter 1, “Understanding Digital Forensics,” the evolution of cybercrime has always accompanied advancements made in technology throughout the decades. However, with the increasing pervasiveness of technology in both the personal and business context, it is extremely important for organizations to have an effective and efficient enterprise security program in place.

In today's modern threat landscape, many organizations have established an enterprise security program with applicable governance models, security architectures, and strategies to effectively manage their business risks and to avoid compromising (losing or exposing) the informational assets belonging to, or entrusted to, them. Most commonly, organizations reference and, where feasible, adopt industry best practices when developing their enterprise security program. If these practices are not suitably tailored to the organization's business risks and needs (e.g., legal, regulatory), the program may overlook the importance of implementing defense-in-depth controls (administrative, technical, and physical) that increase the success rate of digital forensic investigations.

Seeing how digital forensics is considered a sub-discipline of information security, it seems only natural that there is a close relationship between the two. Theoretically, if defense-in-depth security were impenetrable, compromising an organization's informational assets and systems would be impossible. The reality is that security can never be completely effective, and therefore digital forensics capabilities are essential when security events occur. Generally, when a security event does occur, it is critical that immediate and appropriate actions are taken to reduce impact, recover business functions, and ultimately investigate. If not, there is an increased potential that relevant evidence will be damaged, dismissed, or simply overlooked.

Although digital forensics, information security, and cyber security are viewed as different enterprise disciplines, there are commonalities amongst them that present opportunities for enhancing digital forensics capabilities across the enterprise. As part of the enterprise security program, a primary objective is to achieve assurance that the damage or loss of information assets or systems is minimized to within an acceptable level of business risk. One aspect of this comes from having proactive digital forensic capabilities that are intended to maximize the use of potential electronically stored information (ESI) while reducing the cost of investigations. Supporting this, examples of controls that can be implemented throughout the enterprise include:

•  Evidence management framework (i.e., policies, standards, guidance)

•  Administrative, technical, and physical control mechanisms (i.e., operating procedures, tools and equipment, specialized technical skills)

•  Education and training programs (i.e., knowledge, skills)

•  Organizational, regulatory, and legal compliance requirements

By comparison, a primary goal of reactive digital forensic capabilities is to establish fact-based conclusions based on credible evidence. Examples of controls supporting both disciplines that can be implemented throughout the enterprise include:

•  Incident response capabilities, such as a security incident response team (SIRT) or computer (security) incident response team (CIRT|CSIRT)

•  Disaster recovery planning

•  Business continuity planning

•  Gap analysis and recommendations

•  Standard operating procedures (e.g., run books)

A proper balance needs to be reached so that, where industry best practices are adopted, the necessary principles, methodologies, and techniques of digital forensics are incorporated, ensuring that activities such as containment and recovery account for proper evidence gathering and processing requirements. As an enterprise's technology footprint changes over time, it is important that the ways in which digital forensics capabilities are integrated evolve alongside it. Naturally, it can be a constant struggle for organizations to enable their required enterprise security capabilities while finding the right balance for continuously improving their digital forensic capabilities at the same time. Fortunately, digital forensics principles, methodologies, and techniques have been clearly defined, well established, and scientifically proven over several decades, which allows organizations to integrate them relatively seamlessly into most enterprise architectures.

Within an enterprise environment, digital forensics practitioners play a vital role in the protection of informational assets and systems. Depending on the level of knowledge and experience they have gained, discussed further in Chapter 14, they can often be viewed as a "jack of all trades" because their role has allowed them to develop skills across many different fields, such as systems development, information technology (IT) architecture, or information and cyber security. As a result of their involvement in enterprise investigations, digital forensic practitioners can become deeply immersed in the detailed inner workings of many different aspects of an enterprise's business operations.

Information Security vs. Cyber Security

Naturally, every organization is going to speak its own language when it comes to business operations. While this is expected, it is important that every organization speak the same language about what information security means versus what cyber security means. Contrary to common perception, and whether the conflation is inadvertent or not, information security and cyber security are not the same. This raises the question, "What is the difference between information security and cyber security?"

Since the adoption of computer systems in the 1960s, as discussed in Chapter 1, information security has been a discipline focused on the security of informational assets or systems regardless of their state (e.g., physical paper, logical databases). As a means of safeguarding these assets, various technologies, processes (e.g., runbooks), and physical countermeasures (e.g., security checkpoints) are implemented in a defense-in-depth approach to protect physical and electronically stored information from unauthorized access, disruption, modification, or destruction.

With the introduction of inter-connected computer systems and the Internet, the need to safeguard information grew beyond standalone computer systems and expanded into the digital realm. What has evolved into what is now referred to as cyber security involves the security of informational assets or systems in a digital state only (e.g., databases, financial systems). Safeguarding electronically stored information (ESI) involves making use of various technologies and processes to protect networks, information systems, and data from attacks, damage, or unauthorized access.

Taking a closer look at these disciplines, it is evident that each has distinctively unique characteristics. As illustrated in Figure 12.1, it is reasonable to say that cyber security is a subset of information security: the two share a significant overlap in control functions, except where physical security comes into play, and key characteristics are also shared with other information-centric operational functions (i.e., business continuity). What also becomes evident is that information technology encompasses only a portion of risk management. This is because complementary administrative and physical controls, not just technology, need to be in place to effectively manage business risk.

Figure 12.1 Information and cyber security.

Defense-in-Depth

Organizations with effective information security programs have traditionally followed a defense-in-depth strategy that uses multiple layers of security controls. With this traditional approach, their defense-in-depth strategy has commonly focused on defining a physical perimeter as the boundary between the organization’s internal network and the external Internet.

Security controls are deployed and implemented throughout the enterprise following this overall defense-in-depth strategy. Figure 12.2 illustrates the different layers of a defense-in-depth strategy in which security controls are implemented so that different views into information assets can be seen across the enterprise. At the center of the defense-in-depth strategy is where data resides. Moving outwards from the data through the layers is where different interactions occur and where the need to deploy appropriate security controls comes into play.

Traditional Security Monitoring

Monitoring activity that is not within the boundaries of what has been defined as acceptable is a type of technical security control. It should be implemented to increase the probability of positively identifying unacceptable activity, such as threats or attacks (true positives), while decreasing the probability of alerting on acceptable activity (false positives). Achieving this goal requires organizations to invest in efforts to understand their business environment so monitoring technologies can be customized to efficiently sort out the false positives and improve visibility into the true-positive alerts.

Figure 12.2 Defense-in-depth layers.

Figure 12.3 Traditional security layers.

Deploying security controls for targeted monitoring must be done as part of an organization's overall defense-in-depth strategy. As illustrated in Figure 12.3, defense-in-depth security controls implemented throughout the organization help to improve monitoring capabilities by providing different views into the information flows across assets. Most commonly, the following security controls are implemented to facilitate security monitoring:

•  Network devices, such as routers/switches and firewalls, with access control lists (ACLs) to regulate data flows between different security zones (e.g., a demilitarized zone (DMZ), between regional offices)

•  Host-based hardening and vulnerability scanning to reduce the potential attack surface by applying configuration changes and software updates

•  Subject authentication and authorization to maintain the principles of least privilege for access to objects

•  Signature-based technologies, such as anti-malware solutions or intrusion prevention systems (IPS), to detect and mitigate the risk of threat actors at both the endpoint and network layers

Modern Security Monitoring

As the modern threat landscape continues to change and cybercrime evolves, the conventional approach to a defense-in-depth strategy is transforming. More often today, organizations are building business environments that stray away from the concepts that there must be a defined network perimeter and that all devices are, and will always be, connected to, trusted by, and managed by the organization. The erosion of what used to constitute a defined network perimeter is being driven by the increasing demand for a more accessible and mobile workforce. In turn, this is driving the need for organizations to allow their data to be accessed from locations and devices that are not guaranteed to be protected by traditional security monitoring strategies.

As modern technologies, such as mobile computing, desktop virtualization, or cloud infrastructures, continue to proliferate as tools for conducting business, organizations are increasingly faced with the need to expose their business records and applications beyond the borders of their traditional network perimeter controls. With this change in business practice, deploying security controls under the traditional methodologies will not be as effective in identifying unacceptable activity.

To ensure that security monitoring capabilities adapt to continuously evolving technology, threats, and business practices, organizations need to re-engineer their security controls to concentrate more on the actual data. This is not to say that traditional security controls used for monitoring at the network and endpoint layers should be discarded; rather, organizations should consider adding a new layer of security controls that provides monitoring coverage closer to the data. In addition to the security controls specified previously, Figure 12.4 illustrates examples of modern security controls that should be considered as part of data-centric security monitoring:

•  Next-generation (next-gen) firewalls combine traditional deep-packet inspection capabilities with applications awareness to better detect and deny malicious or unacceptable activity.

•  Data loss prevention (DLP) solutions use classifiers to detect and prevent potential data exfiltration across data-at-rest, data-in-transit, and data-in-use scenarios (a minimal classifier sketch follows Figure 12.4).

•  Mobile device management (MDM) allows remote administration and enterprise integration of mobile computing devices such as smartphones and tablets.

•  Content filtering monitors activity and enforces compliance with defined acceptable use policies and standards.

•  Whitelisting, which is the opposite of a known-bad signature-based approach, controls application execution by permitting only trusted and known-good applications to operate.


Figure 12.4 Next-gen security control layers.
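To make the data-centric idea concrete, the following is a minimal sketch of how a DLP-style classifier might flag a payment card number in data-in-transit; the regular expression, Luhn checksum, and sample messages are illustrative assumptions, not any vendor's implementation.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated (illustrative pattern).
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify_outbound(message: str) -> bool:
    """Flag a message as potential exfiltration if it carries a valid card number."""
    for match in CARD_PATTERN.finditer(message):
        if luhn_valid(match.group()):
            return True
    return False

if __name__ == "__main__":
    samples = [
        "Invoice total is 125.00, reference 20230917",     # acceptable
        "Customer card: 4111 1111 1111 1111, exp 09/27",   # unacceptable (test number)
    ]
    for s in samples:
        print(classify_outbound(s), "-", s[:40])
```

The checksum step illustrates why classifiers outperform raw pattern matching: it discards most random digit strings that would otherwise generate false positives.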

Positive Security

Traditional approaches to managing information security through checklists, rules, and compliance cannot keep up with the modern threat landscape or the increasing volume of cyber-related threats and attacks. Essentially, continuing to play the "cat and mouse" game with cyber criminals is not feasible when they have invested significant effort into understanding the strengths and weaknesses of targeted security controls. For example, current antivirus solutions, which follow a blacklisting approach, suffer from low detection rates and are therefore less effective at protecting information systems and assets from malicious attacks.

As the declining effectiveness of blacklisting solutions creates greater opportunities for cyber criminals to succeed in their attacks, attention needs to turn towards security strategies that reduce the overall attack surface by following a risk-based approach. With a risk-based approach, rather than being concerned with identifying and managing threats through specific technology functionalities, the organization's overall attack surface is reduced by implementing threat-agnostic solutions that employ "deny by default" mechanisms in more of a whitelisting approach.

Through a risk-based methodology, or “positive security” approach, organizations will begin to realize several business benefits with respect to the protection of their information systems and assets, such as:

•  Displacing security controls (such as anti-virus solutions) that are becoming less effective or are contributing little value to the organization’s overall defense-in-depth strategy;

•  Improving overall performance of network and information systems by eliminating (blacklist) signature databases that consume significant resources;

•  Reducing the strain on supporting infrastructure(s) for deploying (blacklist) signature updates across remote locations; and

•  Enhancing operational efficiencies by lessening the work effort required to reactively maintain security technologies.

Adopting a strategy that follows a positive security methodology aligns with the proven principles of least privilege by enforcing a deny-by-default approach to securing information systems and assets. In modern environments where cyber-attacks and threats are a constantly evolving and moving target, implementing attack-agnostic solutions that reduce the organization’s overall attack surface is a much more effective and sustainable strategy.
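As a minimal sketch of the deny-by-default principle, the following compares a program's cryptographic hash against a set of known-good hashes before permitting execution; the hash value, file path, and launch logic are hypothetical placeholders.

```python
import hashlib
import sys

# Hypothetical allowlist of SHA-256 hashes for trusted, known-good binaries.
ALLOWLIST = {
    "a7f5f35426b927411fc9231b56382173a1b3f6f184f0e4a3d7f4f8c9d1e2b3c4",
}

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file in streamed chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def permit_execution(path: str) -> bool:
    """Deny by default: permit only if the binary's hash is on the allowlist."""
    return sha256_of(path) in ALLOWLIST

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/usr/bin/example"
    print(("PERMIT: " if permit_execution(target) else "DENY (default): ") + target)
```

Note that the decision logic never references any threat: anything not explicitly trusted is denied, which is what makes the approach attack-agnostic.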

Australian Signals Directorate (ASD)

First published in February 2010, the Australian Signals Directorate (ASD) developed a series of prioritized security controls that, when used strategically, help to mitigate cyber security incidents and intrusions, ransomware and external adversaries with destructive intent, malicious insiders, and breaches of business records. The guidance was generated as a result of the ASD's experience in operational security, incident management, vulnerability assessments, and penetration testing. With this guidance, it is important to note that there is no single security control that can be implemented to mitigate the risk of a cyber security incident. However, the ASD has demonstrated the effectiveness of the following top four strategies, which have proven essential in mitigating approximately 85% of cyber security incidents:

•  Application whitelisting to permit and trust “known-good” applications while restricting the execution of malicious or unapproved applications

•  Patch applications to mitigate the risk of application vulnerabilities being exploited

•  Patch operating system vulnerabilities to mitigate the risk of operating system vulnerabilities being exploited

•  Restrict administrative privileges to the principle of least privilege access for subjects permitted access to only those objects required to perform their duties

While these top four strategies have been deemed essential security controls, and were declared mandatory for Australian government agencies as of April 2013, organizations can selectively implement any of the remaining prioritized security controls on the full list to address specific security needs. Subsequently, in February 2017, the ASD issued a revision to the list that further broke out the top mitigation strategies across multiple risk areas, including:

1.  Mitigation strategies to prevent malware delivery and execution:

•  Application whitelisting of permitted/trusted applications to prevent the execution of malicious/unapproved applications

•  Patch applications to mitigate the risk of application vulnerabilities being exploited

•  Configure Microsoft Office macro settings to block macro execution except for vetted macros in "trusted locations" with limited write access or macros that are digitally signed

•  User application hardening to block, disable, or remove unneeded features and third-party plug-ins in productivity suites (e.g., Microsoft Office), web browsers, and viewers (e.g., Flash)

2.  Mitigation strategies to limit the extent of cyber security incidents:

•  Restrict administrative privileges to the principle of least privilege access for subjects permitted access to only those objects required to perform their duties

•  Patch operating system vulnerabilities to mitigate the risk of operating system vulnerabilities being exploited

•  Multi-factor authentication for remote access services (i.e., Virtual Private Networks (VPN)) and for all privileged access or actions (i.e., access to sensitive data)

3.  Mitigation strategies to detect cyber security incidents and respond:

•  Continuous incident detection and response using automated analysis across centralized and time-synchronized log repositories; refer to Chapter 3 for further discussion on security logging

4.  Mitigation strategies to recover data and system availability:

•  Daily backups of data, software, and configuration settings that are retained in offline repositories as per the enterprise’s defined retention period(s)

Incorporating the mitigation strategies identified as "essential" has proven effective in preventing a wide range of cyber security risks. Additionally, while not directly related to enabling digital forensics capabilities, the technology-generated logs output by these security controls can be useful as potential digital evidence during an investigation.

Analytical Techniques

Approaches to how security monitoring is performed depend on several factors, such as the type of security control used or how the vendor designed the functionality of their technology. Regardless, the foundation of security monitoring is the concept that unacceptable activity is noticeably discernible from acceptable activity and can be detected because of this difference. While several security monitoring techniques have been suggested, the following are considered the three major categories used:

Misuse Detection

Misuse detection is a technique that applies correlation between observed activity and known unacceptable or known-bad behavior. While this technique is effective in identifying threats or attacks, successfully performing misuse detection drives the need for organizations to define what they consider to be unacceptable or known-bad activity. Ultimately, it cannot be used to detect unacceptable behavior if such behavior has not been defined, such as through the creation of a signature within an IPS. The typical model of the misuse detection technique consists of the following components:

1.  Information is gathered from different data sources.

2.  Gathered information is translated and structured into a signature that can be interpreted by the applicable technologies.

3.  The signature is applied to analyze activity and characterize unacceptable or known-bad behavior.

4.  Activity detected as matching the signature is captured for reporting.

Within the misuse detection technique, there are four implementation classes for how organizations can apply this approach to their security monitoring; a short illustrative sketch follows the list:

•  Simple pattern matching uses text-based searching, such as with grep, looking for an exact string of unacceptable or known-bad behavior within a single observed activity.

•  Stateful pattern matching applies the text-based searching capabilities to look for an exact string of unacceptable or known-bad behavior across multiple instances of observed activities.

•  Protocol analysis incorporates a layer of contextual intelligence where the composition of observed activity, such as the domain name system (DNS) protocol, is examined to determine if there are variations in the way it is supposed to be structured.

•  Heuristic analysis involves varying levels of intelligence, learning, and logic about observed activity to determine if unacceptable or known-bad behavior is occurring.
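The following minimal sketch illustrates the simple and stateful pattern matching classes over log lines; the signature expressions, log format, and failure threshold are illustrative assumptions rather than any product's signature language.

```python
import re
from collections import defaultdict

# Simple pattern matching: one signature applied to one observed event.
SIMPLE_SIGNATURE = re.compile(
    r"SELECT .* FROM .* WHERE .*('|%27)\s*OR\s*1=1", re.IGNORECASE
)

# Stateful pattern matching: correlate repeated failures across events per source.
FAILED_LOGIN = re.compile(r"auth failure .* src=(?P<src>\d+\.\d+\.\d+\.\d+)")
FAILURE_THRESHOLD = 3  # illustrative threshold before raising an alert

def detect(log_lines):
    alerts = []
    failures = defaultdict(int)
    for line in log_lines:
        if SIMPLE_SIGNATURE.search(line):
            alerts.append(("sql-injection", line))
        m = FAILED_LOGIN.search(line)
        if m:
            failures[m.group("src")] += 1
            if failures[m.group("src")] == FAILURE_THRESHOLD:
                alerts.append(("brute-force", m.group("src")))
    return alerts

if __name__ == "__main__":
    sample = [
        "GET /item?id=1 SELECT name FROM users WHERE id='' OR 1=1",
        "auth failure for admin src=203.0.113.7",
        "auth failure for admin src=203.0.113.7",
        "auth failure for admin src=203.0.113.7",
    ]
    for alert in detect(sample):
        print(alert)
```

The first signature fires on a single event, while the brute-force rule must retain state (the failure counter) across multiple events, which is precisely the distinction between the two classes.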

Anomaly Detection

Anomaly detection assumes that all observed activity is unacceptable or known-bad behavior if it is not within the predefined scope of acceptable behavior. This technique starts by establishing a baseline of acceptable behavior through the process of observing activity to learn and form an opinion. The typical model of the anomaly detection technique consists of the following components:

1.  A baseline of acceptable behavior is established from all observed activity.

2.  The baseline is translated and structured into a model that can be interpreted by the applicable technologies.

3.  Observed activity is compared against the baseline model to determine if a deviation from acceptable behavior exists.

4.  Unacceptable or known-bad behavior is captured for reporting.

While the main advantage of this technique is that it applies the concept of whitelisting to detect unacceptable or known-bad behavior, versus the use of signatures in misuse detection, the activity used to establish the normal baseline must not contain abnormal activity. It is important that the organization is not under attack while the baseline of acceptable activity is being established. Otherwise, the system may misinterpret unacceptable behavior as acceptable while it is learning, resulting in increased false-negative and false-positive detections.
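The following is a minimal sketch of the baseline-and-compare model using a z-score over historical observations; the training values and three-sigma threshold are illustrative, and production solutions model far richer features.

```python
from statistics import mean, stdev

# Step 1: baseline acceptable behavior from observed activity
# (e.g., daily megabytes uploaded by a host during a learning period).
baseline_observations = [42.0, 38.5, 45.2, 40.1, 44.8, 39.7, 41.3]

# Step 2: translate the baseline into a model (here, mean and standard deviation).
mu = mean(baseline_observations)
sigma = stdev(baseline_observations)
Z_THRESHOLD = 3.0  # illustrative: flag anything beyond three standard deviations

def is_anomalous(observed: float) -> bool:
    """Step 3: compare observed activity against the baseline model."""
    return abs(observed - mu) / sigma > Z_THRESHOLD

if __name__ == "__main__":
    # Step 4: capture deviations for reporting.
    for value in (43.0, 412.0):
        label = "ANOMALY" if is_anomalous(value) else "acceptable"
        print(f"{value:>6} MB uploaded -> {label}")
```

Note how the sketch exposes the baseline-poisoning risk described above: if 412.0 had appeared in the learning data, the inflated mean and deviation would let similar exfiltration pass unnoticed.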

Specification-Based Detection

Specification-based detection techniques do not follow the same methodologies as misuse or anomaly detection. Rather, this technique uses behavior specifications, such as the principle of least privilege, to detect unacceptable or known-bad behavior that does not conform to the intended execution behavior. Essentially, instead of establishing a baseline of acceptable activity through a learning process, stakeholders collectively define the threshold of acceptable behavior, which is then used to detect and report unacceptable or known-bad behavior.

However, organizations need to consider the effort and complexity of establishing the specifications of a system’s acceptable behavior. Although security monitoring technologies can help to automate the assessment of a system’s behavior, organizations need to ensure that they conduct ongoing reviews to verify that the previously defined specifications for a system’s acceptable behavior remain valid.
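As a minimal sketch, the following encodes a hypothetical least-privilege specification as a mapping of subjects to permitted operations and flags any observed action that falls outside it; the subjects and operations are invented for illustration.

```python
# Hypothetical specification of acceptable behavior, agreed by stakeholders up front.
SPECIFICATION = {
    "web-server": {"read:/var/www", "write:/var/log/httpd", "connect:db:5432"},
    "backup-agent": {"read:/var/www", "write:/backups"},
}

def check_action(subject: str, action: str) -> bool:
    """Return True if the observed action conforms to the subject's specification."""
    return action in SPECIFICATION.get(subject, set())

if __name__ == "__main__":
    observed = [
        ("web-server", "connect:db:5432"),   # conforms to the specification
        ("web-server", "read:/etc/shadow"),  # violates the specification
    ]
    for subject, action in observed:
        verdict = "conforms" if check_action(subject, action) else "VIOLATION"
        print(f"{subject}: {action} -> {verdict}")
```

The maintenance burden noted above is visible even in this toy example: every legitimate change to a system's behavior requires the specification dictionary to be reviewed and updated.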

Machine Learning

As early as the 1980s, advancements in technology powered a realization that new algorithms used as part of artificial intelligence and machine learning could eventually help in decision-making processes. In the context of digital forensics, the application of machine learning is the process of applying multiple techniques and technologies to better interpret and gain knowledge about evidence from multiple data sources.

Making effective use of machine learning capabilities requires understanding which approach is best suited to the needs of the investigation, including the following:

•  Inductive machine learning involves self-learning based on techniques such as cluster analysis, link analysis, and textual mining analytics.

•  Deductive machine learning involves supervised learning based on rule generators and decision trees.

Generally, machine learning forensics is the capability to recognize patterns across potential digital evidence to reduce or validate the impact of cybercrime; further discussion about applying digital forensics capabilities to mitigate business risk scenarios can be found in Chapter 5. When using machine learning, digital forensics practitioners should consider using any combination of inductive and deductive approaches to support the fact-based conclusions established during the investigation.

Extractive Forensics

Extracting data from unstructured data sources has become increasingly challenging because of how rapidly technology is advancing and how quickly data volumes are growing. Extractive forensics involves any combination of techniques used to uncover relationships and associations between individuals and ESI (i.e., documents, email messages). The following techniques are commonly used as part of extractive forensics.

Link Analysis Link analysis is used to understand context, such as "who knew what, when, and where." Generally, link analysis assists digital forensics practitioners in discovering relationships by directing them to specific relevant evidence (and other information) of interest that requires further analysis, rather than identifying large data patterns independently.

With link analysis, entities (e.g., individuals, systems) are represented as objects (e.g., circles) and are referred to as nodes. Connecting the nodes together are lines, known as edges, that apply weighting to represent the strength of relationships between nodes, with stronger links having thicker connection lines. With link analysis technologies, graphs are used to visualize the relationships between nodes, their associations, and in some cases the counts and direction of the connections and relationships.

As with any technique and technology, there are limitations with the use of link analysis during a security investigation. For example, while link analysis is great for discovering associations and relationships, its main limitation becomes noticeable as the number of nodes grows too large and graphs become cluttered. Because of this, the use of link analysis during an investigation should be assessed to ensure that its application will simplify and narrow the scope of an investigation.
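A minimal sketch of building a weighted link graph from message metadata follows; the (sender, recipient) records are fabricated, and real link analysis tooling would render the resulting nodes and edges graphically.

```python
from collections import Counter

# Fabricated message metadata: (sender, recipient) pairs extracted from ESI.
messages = [
    ("alice", "bob"), ("alice", "bob"), ("alice", "bob"),
    ("bob", "carol"), ("carol", "alice"), ("alice", "mallory"),
]

# Nodes are entities; edge weight counts how often a directed link was observed.
edges = Counter(messages)
nodes = {n for pair in messages for n in pair}

print("nodes:", sorted(nodes))
for (src, dst), weight in edges.most_common():
    # A higher weight corresponds to a thicker line, i.e., a stronger relationship.
    print(f"{src} -> {dst} (weight={weight})")
```

Even this tiny graph shows the scaling limitation described above: adding a few hundred nodes would make the edge listing, like a rendered graph, too cluttered to interpret without filtering.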

Text Mining Text mining is a technique used to discover content (and context) hidden within the massive volumes of data sources. As technology continues to advance, organizations are increasingly generating and storing their informational assets in an unstructured format. This technique has become increasingly important because of how it allows digital forensic practitioners to sort and organize large volumes of unstructured ESI through tasks such as taxonomy categorization, concept clustering, and summarization.

Using technology, analyzing substantial amounts of unstructured ESI can be automated to improve pattern and concept identification. Without the use of these tools, digital forensics practitioners could potentially miss or overlook hidden patterns or concepts because of how difficult, if not impossible, it can be to discover these by applying traditional (manual) investigative techniques.

Generally, text mining applies three approaches to sort, organize, and prioritize unstructured ESI: information retrieval, information extraction, and natural language processing. By using these approaches, digital forensics practitioners can convert massive amounts of unstructured ESI into structured models that can be used to enhance analysis and correlation of digital evidence during an investigation.
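The following is a minimal sketch of the information retrieval side of text mining, ranking fabricated documents by the frequency of a query term; production tools layer information extraction and natural language processing on top of this idea.

```python
import re
from collections import Counter

# Fabricated unstructured ESI (e.g., extracted email bodies).
documents = {
    "email-001": "Quarterly transfer approved. Move the funds before audit review.",
    "email-002": "Lunch on Friday? Also the audit schedule moved again.",
    "email-003": "Delete the transfer records after the funds clear.",
}

def tokenize(text: str):
    """Lowercase word tokens; a real pipeline would also stem and drop stopwords."""
    return re.findall(r"[a-z]+", text.lower())

def rank(query: str):
    """Rank documents by how often the query term appears (simple term frequency)."""
    scores = {doc_id: Counter(tokenize(body))[query]
              for doc_id, body in documents.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for doc_id, score in rank("transfer"):
        print(f"{doc_id}: score={score}")
```

Converting free text into token counts is exactly the unstructured-to-structured conversion described above, and it is what allows later clustering and correlation steps to operate at scale.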

Inductive Forensics

Creating visual representations and predictive models based on investigative evidence helps digital forensic practitioners establish their fact-based conclusions. Inductive forensics is a form of unsupervised learning that involves performing cluster analysis to group a set of objects into smaller groups based on some measure of similarity, including the following approaches:

•  Hierarchical clustering builds multi-leveled orders using a tree structure.

•  K-means clustering partitions data into k distinct groups based on each point's distance to the nearest cluster centroid (mean).

•  Gaussian mixture models represent clusters as combinations of two or more Gaussian-distributed variables.

•  Self-organizing maps use models to learn the topology and distribution of data.

•  Hidden Markov models apply observed data to recover the sequence of data states.

For digital forensics practitioners, the use of unsupervised learning and cluster analysis provides a way of making observations that can lead to establishing evidence-based facts and conclusions about the investigation. While this type of unsupervised discovery is useful for exploratory analysis, it can also feed a deductive type of analysis that establishes facts about the differences and uniqueness amongst clusters.
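As a minimal sketch of cluster analysis, the following implements a tiny k-means loop over two-dimensional points (say, file size versus last-access hour); the data points and choice of k are illustrative, and practitioners would normally reach for a mature library.

```python
import random

def kmeans(points, k, iterations=20, seed=42):
    """Partition 2-D points into k clusters around their mean (centroid)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assignments = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        assignments = [
            min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                        + (p[1] - centroids[c][1]) ** 2)
            for p in points
        ]
        # Update step: recompute each centroid as the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assignments) if a == c]
            if members:
                centroids[c] = (
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                )
    return assignments, centroids

if __name__ == "__main__":
    # Illustrative points: (file size in MB, last-access hour of day).
    data = [(1.2, 9), (0.8, 10), (1.5, 11), (220.0, 2), (240.0, 3), (215.0, 2)]
    labels, centers = kmeans(data, k=2)
    print("labels:", labels)
    print("centroids:", centers)
```

Here the two clusters that emerge (small daytime files versus large overnight files) are the kind of unsupervised observation that can then be examined deductively for uniqueness between groups.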

Deductive Forensics

In today’s digital world, it is difficult to exist without having some level of digital profile existing throughout a broad range of technologies. Because of this, the information available during an investigation can allow for patterns to be captured, modeled, and used so that behaviors and the potential of impending security events or incidents can be anticipated.

In the early days of these technologies, the vision of what machine learning could become was realized through tools such as decision trees and mathematical algorithms. With the significant advancements made to technologies from the 1980s onward, a new generation of machine learning capabilities was realized with the evolution of clustering and link and text analysis. In this new paradigm, where a variety of techniques are applied in different combinations, organizations are moving further away from being reactive and are increasingly customizing their targeted security monitoring capabilities based on what is learned through deductive learning.

Deductive reasoning works by making generalized information more specific, which is often referred to as a "top-down" approach. It begins by developing a theory and moves into a hypothesis that is testable or verifiable through fact-based observations and evidence. Figure 12.5 illustrates the phases of inductive and deductive reasoning and how the two work together to form a continuous cycle.

For example, consider the behavioral analytics that were traditionally used as the underlying process enabling digital marketers to push products and services. The same technologies, now known as "user and entity behavior analytics" (UEBA), apply a variety of extractive and inductive machine learning techniques to track and identify security events as a component of enterprise security, and digital forensics, programs. Generally, UEBA solutions are designed to identify and baseline the behavior of users and entities (i.e., systems, applications) to detect deviations from normal behavior, with profiles built on several factors such as work hours, logical and physical access, or business role. This is much like the way file integrity monitoring (FIM) technologies baseline information systems to detect deviations from that baseline.


Figure 12.5 Phases of deductive and inductive reasoning.

Machine learning forensics can be strategically deployed throughout the organization to react and take appropriate action(s) to mitigate security events or incidents from happening and reduce the overall business risk posture. However, it is important that organizations remember that the use of any machine learning tool or technique during an investigation still requires interactions and involvement of digital forensics practitioners. For the time being, human guidance, experience, and contribution are the key success criteria in establishing evidence-based facts and conclusions during any digital forensics investigation.
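To illustrate the UEBA baselining idea, the following is a minimal sketch that profiles each user's historical login hours and flags logins falling well outside that profile; the events and two-hour tolerance are invented for illustration.

```python
from collections import defaultdict

# Fabricated historical login events: (user, hour-of-day).
history = [
    ("alice", 9), ("alice", 10), ("alice", 9), ("alice", 11),
    ("bob", 22), ("bob", 23), ("bob", 21),
]

TOLERANCE_HOURS = 2  # illustrative deviation allowed around the observed range

# Build a per-user profile of observed login hours.
profiles = defaultdict(list)
for user, hour in history:
    profiles[user].append(hour)

def is_deviation(user: str, hour: int) -> bool:
    """Flag a login outside the user's baseline range plus tolerance."""
    hours = profiles.get(user)
    if not hours:
        return True  # no baseline yet: treat as a deviation worth reviewing
    return hour < min(hours) - TOLERANCE_HOURS or hour > max(hours) + TOLERANCE_HOURS

if __name__ == "__main__":
    for user, hour in (("alice", 10), ("alice", 3), ("bob", 22)):
        verdict = "DEVIATION" if is_deviation(user, hour) else "normal"
        print(user, hour, "->", verdict)
```

The point of the sketch is that the same behavior (a 3 a.m. login) is normal for one profile and a deviation for another, which is what separates UEBA from static, one-size-fits-all rules; the final judgment on any flagged event still rests with the practitioner.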

Implementation Concerns

Before organizations begin practicing any of the analytical techniques discussed previously, it is important that decisions are made regarding the deployment of the security monitoring solution. As a starting point, the implementation of the monitoring solution must be supported through existing governance documentation and justified through a formalized risk assessment, as discussed further in Addendum E, “Risk Assessment.” The following list outlines other areas of consideration that must be decided before a security monitoring solution can be implemented:

•  Ensuring criticality-based deployment of monitoring capabilities to target high-value and high-risk employees, systems, networks, etc.

•  Selecting the combination of analytical techniques to be used as part of the monitoring solution

•  Using a monitoring solution that meets the established service level objectives (SLOs) for responding

•  Conducting regular reviews of signatures and detection mechanisms to ensure accurate and timely identification of unacceptable or known-bad behavior

Summary

Before organizations implement any form of security monitoring, it is important that they understand the scope of what they need to monitor and how they will go about achieving it so they can implement targeted capabilities. Once established, using any combination of analytical techniques to monitor acceptable and unacceptable behavior will improve detection capabilities to identify events and/or incidents before they intensify.
