Chapter 11

Cybersecurity Incident Response

Chapter Objectives

After reading this chapter and completing the exercises, you will be able to do the following:

  • Prepare for a cybersecurity incident.

  • Identify a cybersecurity incident.

  • Understand the incident response plan.

  • Understand the incident response process.

  • Understand information sharing and coordination.

  • Identify incident response team structure.

  • Understand federal and state data breach notification requirements.

  • Consider an incident from the perspective of the victim.

  • Create policies related to security incident management.

Incidents happen. Security-related incidents have become not only more numerous and diverse but also more damaging and disruptive. A single incident can cause the demise of an entire organization. In general terms, incident management is defined as a predictable response to damaging situations. It is vital that organizations have the practiced capability to respond quickly, minimize harm, comply with breach-related state laws and federal regulations, and maintain their composure in the face of an unsettling and unpleasant experience.

FYI: ISO/IEC 27002:2013 and NIST Guidance

Section 16 of ISO 27002:2013, “Information Security Incident Management,” focuses on ensuring a consistent and effective approach to the management of information security incidents, including communication on security events and weaknesses.

Corresponding NIST guidance is provided in the following documents:

  • SP 800-61 Revision 2: “Computer Security Incident Handling Guide”

  • SP 800-83: “Guide to Malware Incident Prevention and Handling”

  • SP 800-86: “Guide to Integrating Forensic Techniques into Incident Response”

Incident Response

Incidents drain resources, can be very expensive, and can divert attention from the business of doing business. Keeping the number of incidents as low as possible should be an organizational priority. That means identifying and remediating weaknesses and vulnerabilities before they can be exploited, to the extent possible. As we discussed in Chapter 4, “Governance and Risk Management,” a sound approach to improving an organizational security posture and preventing incidents is to conduct periodic risk assessments of systems and applications. These assessments should determine what risks are posed by combinations of threats, threat sources, and vulnerabilities. Risks can be mitigated, transferred, or avoided until a reasonable overall level of acceptable risk is reached. However, it is important to realize that users will make mistakes, external events may be out of an organization’s control, and malicious intruders are motivated. Unfortunately, even the best prevention strategy isn’t always enough, which is why preparation is key.

Incident preparedness includes having policies, strategies, plans, and procedures. Organizations should create written guidelines, have supporting documentation prepared, train personnel, and engage in mock exercises. An actual incident is not the time to learn. Incident handlers must act quickly and make far-reaching decisions—often while dealing with uncertainty and incomplete information. They are under a great deal of stress. The more prepared they are, the better the chance that sound decisions will be made.

Computer security incident response is a critical component of information technology (IT) programs. The incident response process and incident handling activities can be very complex. To establish a successful incident response program, you must dedicate substantial planning and resources. Several industry resources were created to help organizations establish a computer security incident response program and learn how to handle cybersecurity incidents efficiently and effectively. One of the best resources available is NIST Special Publication 800-61, which can be obtained from the following URL:

http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf

NIST developed Special Publication 800-61 due to statutory responsibilities under the Federal Information Security Management Act (FISMA) of 2002, Public Law 107-347.

The benefits of having a practiced incident response capability include the following:

  • Calm and systematic response

  • Minimization of loss or damage

  • Protection of affected parties

  • Compliance with laws and regulations

  • Preservation of evidence

  • Integration of lessons learned

  • Lower future risk and exposure

FYI: United States Computer Emergency Readiness Team (US-CERT)

US-CERT is the 24-hour operational arm of the Department of Homeland Security’s National Cybersecurity and Communications Integration Center (NCCIC). US-CERT accepts, triages, and collaboratively responds to incidents; provides technical assistance to information system operators; and disseminates timely notifications regarding current and potential security threats and vulnerabilities.

US-CERT also distributes vulnerability and threat information through its National Cyber Awareness System (NCAS). There are four mailing lists that anyone can subscribe to:

  • Alerts: Timely information about current security issues, vulnerabilities, and exploits.

  • Bulletins: Weekly summaries of new vulnerabilities. Patch information is provided when available.

  • Tips: Advice about common security issues for the general public.

  • Current Activity: Up-to-date information about high-impact types of security activity affecting the community at large.

To subscribe, visit https://public.govdelivery.com/accounts/USDHSUSCERT/subscriber/new.

What Is an Incident?

A cybersecurity incident is an adverse event that threatens business security and/or disrupts service. Sometimes confused with a disaster, an information security incident is related to loss of confidentiality, integrity, or availability (CIA), whereas a disaster is an event that results in widespread damage or destruction, loss of life, or drastic change to the environment. Examples of incidents include exposure of or modification of legally protected data, unauthorized access to intellectual property, or disruption of internal or external services. The starting point of incident management is to create an organization-specific definition of the term incident so that the scope of the term is clear. Declaration of an incident should trigger a mandatory response process.

Not all security incidents are the same. For example, a breach of personally identifiable information (PII) typically triggers strict disclosure requirements. The OMB Memorandum M-07-16, “Safeguarding Against and Responding to the Breach of Personally Identifiable Information,” requires Federal agencies to develop and implement a breach notification policy for PII. Another example is Article 33 of the GDPR, “Notification of a personal data breach to the supervisory authority,” which specifies that any organization under the regulation must report a data breach within 72 hours. NIST defines a privacy breach as follows: “when sensitive PII of taxpayers, employees, beneficiaries, etc. was accessed or exfiltrated.” NIST also defines a proprietary breach as when “unclassified proprietary information, such as protected critical infrastructure information (PCII), was accessed or exfiltrated.” An integrity breach is when sensitive or proprietary information is changed or deleted.

Before you learn the details about how to create a good incident response program within your organization, you must understand the difference between security events and security incidents. The following is from NIST Special Publication 800-61:

“An event is any observable occurrence in a system or network. Events include a user connecting to a file share, a server receiving a request for a web page, a user sending email, and a firewall blocking a connection attempt. Adverse events are events with a negative consequence, such as system crashes, packet floods, unauthorized use of system privileges, unauthorized access to sensitive data, and execution of malware that destroys data.”

According to the same document, “a computer security incident is a violation or imminent threat of violation of computer security policies, acceptable use policies, or standard security practices.”

The definition and criteria should be codified in policy. Incident management extends to third-party environments. As we discussed in Chapter 8, “Communications and Operations Security,” business partners and vendors should be contractually obligated to notify the organization if an actual or suspected incident occurs.

Table 11-1 lists a few examples of cybersecurity incidents.

TABLE 11-1 Cybersecurity Incident Examples

1. Attacker sends a crafted packet to a router and causes a denial of service condition.

2. Attacker compromises a point-of-sale (POS) system and steals credit card information.

3. Attacker compromises a hospital database and steals thousands of health records.

4. Ransomware is installed on a critical server and all files are encrypted by the attacker.

In Practice

Incident Definition Policy

Synopsis: To define organizational criteria pertaining to an information security incident.

Policy Statement:

  • An information security incident is an event that has the potential to adversely impact the company, our clients, our business partners, and/or the public-at-large.

  • An information security incident is defined as the following:

    • Actual or suspected unauthorized access to, compromise of, acquisition of, or modification of protected client or employee data, including but not limited to:

      – Personal identification numbers, such as social security numbers (SSNs), passport numbers, driver’s license numbers

      – Financial account or credit card information, including account numbers, card numbers, expiration dates, cardholder names, and service codes

      – Health care/medical information

    • Actual or suspected event that has the capacity to disrupt the services provided to our clients.

    • Actual or suspected unauthorized access to, compromise of, acquisition of, or modification of company intellectual property.

    • Actual or suspected event that has the capacity to disrupt the company’s ability to provide internal computing and network services.

    • Actual or suspected event that is in violation of legal or statutory requirements.

    • Actual or suspected event not defined above that warrants incident classification as determined by management.

  • All employees, contractors, consultants, vendors, and business partners are required to report known or suspected information security incidents.

  • This policy applies equally to internal and third-party incidents.

Although any number of events could result in an incident, a core group of attacks or situations are most common. Every organization should understand and be prepared to respond to intentional unauthorized access, distributed denial of service (DDoS) attacks, malicious code (malware), and inappropriate usage.

Intentional Unauthorized Access or Use

An intentional unauthorized access incident occurs when an insider or intruder gains logical or physical access without permission to a network, system, application, data, or other resource. Intentional unauthorized access is typically gained through the exploitation of operating system or application vulnerabilities using malware or other targeted exploits, the acquisition of usernames and passwords, the physical acquisition of a device, or social engineering. Attackers may acquire limited access through one vector and use that access to move to the next level.

Denial of Service (DoS) Attacks

A denial of service (DoS) attack is an attack that prevents or impairs the normal authorized functionality of networks, systems, or applications by exhausting resources or by obstructing or overloading the communication channel. The attack may be directed at the organization, or the organization’s resources may be consumed as an unauthorized participant in a DoS attack against a third party. DoS attacks have become an increasingly severe threat, and the lack of availability of computing and network services now translates to significant disruption and major financial loss.

Note

Refer to Chapter 8 for a description of DoS attacks.

FYI: The Mirai Botnet

The Mirai botnet is an Internet of Things (IoT) botnet responsible for launching a historically large distributed denial-of-service (DDoS) attack against KrebsOnSecurity and several other victims. Mirai is malware that compromises networking devices running Linux, turning them into remotely controlled bots that can be used as part of a botnet in large-scale DDoS attacks. The malware mostly targets online consumer devices such as IP cameras and home routers. You can access several articles about this botnet and malware at https://krebsonsecurity.com/tag/mirai-botnet.

FYI: False Positives, False Negatives, True Positives, and True Negatives

The term false positive is a broad term that describes a situation in which a security device triggers an alarm but there is no malicious activity or an actual attack taking place. In other words, false positives are “false alarms,” and they are also called “benign triggers.” False positives are problematic because by triggering unjustified alerts, they diminish the value and urgency of real alerts. If you have too many false positives to investigate, it becomes an operational nightmare, and you most definitely will overlook real security events.

There are also false negatives, which is the term used to describe a network intrusion device’s inability to detect true security events under certain circumstances—in other words, a malicious activity that is not detected by the security device.

A true positive is a successful identification of a security attack or a malicious event. A true negative is when the intrusion detection device identifies an activity as acceptable behavior and the activity is actually acceptable.
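The four outcomes above can be tallied into simple alert-quality metrics. The following is a minimal sketch; the function name and the example counts are ours, invented purely for illustration:

```python
# Compute basic alert-quality metrics from detection outcome counts.
# The counts used below are hypothetical, for illustration only.
def alert_metrics(tp, fp, tn, fn):
    """Return detection rate, false-alarm rate, and precision."""
    detection_rate = tp / (tp + fn) if (tp + fn) else 0.0    # attacks caught vs. missed (false negatives)
    false_alarm_rate = fp / (fp + tn) if (fp + tn) else 0.0  # benign triggers vs. true negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0         # how trustworthy a raised alert is
    return detection_rate, false_alarm_rate, precision

# Example: 40 real attacks detected, 10 missed, 200 false alarms, 9,750 clean events.
dr, far, prec = alert_metrics(tp=40, fp=200, tn=9750, fn=10)
print(f"detection rate={dr:.2f}, false alarm rate={far:.2f}, precision={prec:.2f}")
```

This prints detection rate=0.80, false alarm rate=0.02, precision=0.17; a precision that low illustrates why too many false positives diminish the value of real alerts.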

Traditional IDS and IPS devices need to be tuned to avoid false positives and false negatives. Next-generation IPSs do not require the same level of tuning as traditional IPSs, and they provide much deeper reporting and functionality, including advanced malware protection and retrospective analysis to see what happened after an attack took place.

Traditional IDS and IPS devices also suffer from many evasion attacks. The following are some of the most common evasion techniques against traditional IDS and IPS devices:

  • Fragmentation: When the attacker evades the IPS box by sending fragmented packets.

  • Using low-bandwidth attacks: When the attacker uses techniques that consume little bandwidth or send a very small number of packets in order to evade detection.

  • Address spoofing/proxying: Using spoofed IP addresses or sources, as well as using intermediary systems such as proxies to evade inspection.

  • Pattern change evasion: Attackers may use polymorphic techniques to create unique attack patterns.

  • Encryption: Attackers can use encryption to hide their communication and information.

Malware

Malware has become the tool of choice for cybercriminals, hackers, and hacktivists. Malware (malicious software) refers to code that is covertly inserted into another program with the intent of gaining unauthorized access, obtaining confidential information, disrupting operations, destroying data, or in some manner compromising the security or integrity of the victim’s data or system. Malware is designed to function without the user’s knowledge. There are multiple categories of malware, including viruses, worms, Trojans, bots, ransomware, rootkits, and spyware/adware. Suspicion of or evidence of malware infection should be considered an incident. Malware that has been successfully quarantined by antivirus software should not be considered an incident.

Note

Refer to Chapter 8 for an extensive discussion of malware.

Inappropriate Usage

An inappropriate usage incident occurs when an authorized user performs actions that violate internal policy, agreement, law, or regulation. Inappropriate usage can be internal facing, such as accessing data when there is clearly not a “need to know.” An example would be when an employee or contractor views a patient’s medical records or a bank customer’s financial records purely for curiosity’s sake, or when the employee or contractor shares information with unauthorized users. Inappropriate usage can also be external facing, where the perpetrator is an insider and the victim is a third party (for example, the downloading of music or video in violation of copyright laws).

Incident Severity Levels

Not all incidents are equal in severity. Included in the incident definition should be severity levels based on the operational, reputational, and legal impact to the organization. Corresponding to the level should be required response times as well as minimum standards for internal notification. Table 11-2 illustrates this concept.

TABLE 11-2 Incident Severity Level Matrix

An information security incident is any adverse event whereby some aspect of an information system or information itself is threatened. Incidents are classified by severity relative to the impact they have on an organization. This severity level is typically assigned by an incident manager or a cybersecurity investigator. How it is validated depends on the organizational structure and the incident response policy. Each level has a maximum response time and minimum internal notification requirements.

Severity Level = 1

Explanation

Level 1 incidents are defined as those that could cause significant harm to the business, customers, or the public and/or are in violation of corporate law, regulation, or contractual obligation.

Required Response Time

Immediate.

Required Internal Notification

Chief Executive Officer.

Chief Operating Officer.

Legal counsel.

Chief Information Security Officer.

Designated incident handler.

Examples

Compromise or suspected compromise of protected customer information.

Theft or loss of any device or media on any device that contains legally protected information.

A denial of service attack.

Identified connection to “command and control” sites.

Compromise or suspected compromise of any company website or web presence.

Notification by a business partner or vendor of a compromise or potential compromise of a customer or customer-related information.

Any act that is in direct violation of local, state, or federal law or regulation.

Severity Level = 2

Explanation

Level 2 incidents are defined as compromise of or unauthorized access to noncritical systems or information; detection of a precursor to a focused attack; a believed threat of an imminent attack; or any act that is a potential violation of law, regulation, or contractual obligation.

Required Response Time

Within four hours.

Required Internal Notification

Chief Operating Officer.

Legal counsel.

Chief Information Security Officer.

Designated incident handler.

Examples

Inappropriate access to legally protected or proprietary information.

Malware detected on multiple systems.

Warning signs and/or reconnaissance detected related to a potential exploit.

Notification from a third party of an imminent attack.

Severity Level = 3

Explanation

Level 3 incidents are defined as situations that can be contained and resolved by the information system custodian, data/process owner, or HR personnel. There is no evidence or suspicion of harm to customer or proprietary information, processes, or services.

Required Response Time

Within 24 hours.

Required Internal Notification

Chief Information Security Officer.

Designated incident handler.

Examples

Malware detected and/or suspected on a workstation or device, with no external connections identified.

User access to content or sites restricted by policy.

User’s excessive use of bandwidth or resources.
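A severity matrix like Table 11-2 lends itself to a simple lookup that incident handlers can build into ticketing or paging tools. The following is a minimal sketch, assuming abbreviated titles (CEO, COO, CISO) and our own function names; the criteria themselves come from the table:

```python
# Map a severity level (per Table 11-2) to its required response time and
# minimum internal notification list. Titles are abbreviated for brevity.
SEVERITY_MATRIX = {
    1: {"response": "immediate",
        "notify": ["CEO", "COO", "Legal counsel", "CISO", "Designated incident handler"]},
    2: {"response": "within 4 hours",
        "notify": ["COO", "Legal counsel", "CISO", "Designated incident handler"]},
    3: {"response": "within 24 hours",
        "notify": ["CISO", "Designated incident handler"]},
}

def triage(level: int) -> dict:
    """Return response-time and notification requirements for a severity level.

    Per policy, when in doubt between two levels, err on the side of caution
    and assign the higher severity (the lower level number).
    """
    if level not in SEVERITY_MATRIX:
        raise ValueError(f"unknown severity level: {level}")
    return SEVERITY_MATRIX[level]

print(triage(2)["response"])  # within 4 hours
```

In practice, such a lookup would feed an alerting system so that the required notifications are sent automatically when a handler assigns a level.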

How Are Incidents Reported?

Incident reporting is best accomplished by implementing simple, easy-to-use mechanisms that can be used by all employees to report the discovery of an incident. Employees should be required to report all actual and suspected incidents. They should not be expected to assign a severity level, because the person who discovers an incident may not have the skill, knowledge, or training to properly assess the impact of the situation.

People frequently fail to report potential incidents because they are afraid of being wrong and looking foolish, they do not want to be seen as a complainer or whistleblower, or they simply don’t care enough and would prefer not to get involved. These objections must be countered by encouragement from management. Employees must be assured that even if they were to report a perceived incident that ended up being a false positive, they would not be ridiculed or met with annoyance. On the contrary, their willingness to get involved for the greater good of the company is exactly the type of behavior the company needs! They should be supported for their efforts and made to feel valued and appreciated for doing the right thing.
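A reporting mechanism that follows these principles deliberately avoids asking the reporter for a severity level. The following is a minimal sketch of such an intake function; the function and field names are hypothetical, not drawn from any standard:

```python
from datetime import datetime, timezone

# Hypothetical incident intake: captures who reported what and when, but
# deliberately does NOT ask the reporter for a severity level. Severity is
# assigned later by a trained incident handler.
def report_incident(reporter: str, description: str, queue: list) -> dict:
    record = {
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "reporter": reporter,
        "description": description,
        "severity": None,      # assigned by the incident handler, not the reporter
        "status": "reported",
    }
    queue.append(record)
    return record

queue = []
report_incident("jdoe", "Laptop with client data missing from office", queue)
```

Keeping the form this short lowers the barrier to reporting, which supports the management encouragement described above.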

Digital forensic evidence is information in digital form found on a wide range of endpoint, server, and network devices—basically, any information that can be processed by a computing device or stored on other media. Evidence tendered in legal cases, such as criminal trials, is classified as witness testimony or direct evidence, or as indirect evidence in the form of an object, such as a physical document, the property owned by a person, and so forth.

Cybersecurity forensic evidence can take many forms, depending on the conditions of each case and the devices from which the evidence was collected. To prevent or minimize contamination of the suspect’s source device, you can use different tools, such as a piece of hardware called a write blocker, on the specific device so you can copy all the data (or an image of the system).

The imaging process is intended to copy all blocks of data from the computing device to the forensic professional’s evidentiary system. This is sometimes referred to as a “physical copy” of all data, as distinct from a “logical copy,” which copies only what a user would normally see. Logical copies do not capture all the data, and the process will alter some file metadata to the extent that its forensic value is greatly diminished, possibly resulting in a legal challenge by the opposing legal team. Therefore, a full bit-for-bit copy is the preferred forensic process. The file created on the target device is called a forensic image file.
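A forensic image is commonly verified by hashing: the image file must produce the same cryptographic digest as the source read through the write blocker. The following is a minimal sketch of that verification step, assuming the source and image are both readable as ordinary files; real imaging tools add logging, bad-sector handling, and multiple hash algorithms:

```python
import hashlib

# Compute a SHA-256 digest over a device or image file in fixed-size blocks.
# In practice the source is read through a hardware write blocker so that
# the original evidence is never modified.
def sha256_of(path: str, block_size: int = 1024 * 1024) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_image(source_path: str, image_path: str) -> bool:
    """A bit-for-bit image is sound only if it hashes identically to the source."""
    return sha256_of(source_path) == sha256_of(image_path)
```

The matching digests are recorded in the case documentation so the image can be re-verified at any later point, including in court.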

Chain of custody is the way you document and preserve evidence from the time that you started the cyber forensics investigation to the time the evidence is presented in court. It is extremely important to be able to show clear documentation of the following:

  • How the evidence was collected

  • When it was collected

  • How it was transported

  • How it was tracked

  • How it was stored

  • Who had access to the evidence and how it was accessed
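The documentation requirements above map naturally to an append-only log, one entry per custody event. The following is a minimal sketch; the class and field names are our own, not drawn from any standard chain-of-custody form:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# One entry per custody event: collection, transport, storage, or access.
@dataclass
class CustodyEntry:
    timestamp: str
    action: str       # e.g., "collected", "transported", "stored", "accessed"
    handler: str      # who had the evidence
    details: str      # how: method, location, container, witnesses

@dataclass
class EvidenceItem:
    item_id: str
    description: str
    log: List[CustodyEntry] = field(default_factory=list)

    def record(self, action: str, handler: str, details: str) -> None:
        """Append a custody event; entries are never edited or removed."""
        self.log.append(CustodyEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            action=action, handler=handler, details=details))

drive = EvidenceItem("EV-001", "2 TB laptop hard drive")
drive.record("collected", "A. Analyst", "Imaged via write blocker at subject's desk")
drive.record("transported", "A. Analyst", "Locked antistatic container to forensics lab")
```

A real system would also make the log tamper-evident (for example, by hashing or signing each entry), since the whole point is to withstand challenge in court.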

A method often used for evidence preservation is to work only with a copy of the evidence; in other words, you do not want to work directly with the evidence itself. This involves creating an image of any hard drive or other storage device. You must also prevent electrostatic discharge (ESD) and other electrical discharges from damaging or erasing evidentiary data; special antistatic evidence bags should be used to store digital devices. Some organizations even have cyber forensic labs that restrict access to authorized users and investigators. One method often used involves constructing what is called a Faraday cage. This cage is often built out of a mesh of conducting material that prevents electromagnetic energy from entering or escaping, which also prevents devices from communicating via Wi-Fi or cellular signals.

What’s more, transporting the evidence to the forensics lab or any other place, including the courthouse, has to be done very carefully. It is critical that the chain of custody be maintained during this transport. When you transport the evidence, you should strive to secure it in a lockable container. It is also recommended that the responsible person stay with the evidence at all times during transportation.

In Practice

Information Security Incident Classification Policy

Synopsis: Classify incidents by severity and assigned response and notification requirements.

Policy Statement:

  • Incidents are to be classified by severity relative to the impact they have on an organization. If there is ever a question as to which level is appropriate, the company must err on the side of caution and assign the higher severity level.

  • Level 1 incidents are defined as those that could cause significant harm to the business, customers, or the public and/or are in violation of corporate law, regulation, or contractual obligation:

    • Level 1 incidents must be responded to immediately upon report.

    • The Chief Executive Officer, Chief Operating Officer, legal counsel, and Chief Information Security Officer must be informed of Level 1 incidents.

  • Level 2 incidents are defined as a compromise of or unauthorized access to noncritical systems or information; detection of a precursor to a focused attack; a believed threat of an imminent attack; or any act that is a potential violation of law, regulation, or contractual obligation:

    • Level 2 incidents must be responded to within four hours.

    • The Chief Operating Officer, legal counsel, and Chief Information Security Officer must be informed of Level 2 incidents.

  • Level 3 incidents are defined as situations that can be contained and resolved by the information system custodian, data/process owner, or HR personnel. There is no evidence or suspicion of harm to customer or proprietary information, processes, or services:

    • Level 3 incidents must be responded to within 24 business hours.

    • The Chief Information Security Officer must be informed of Level 3 incidents.

What Is an Incident Response Program?

An incident response program is composed of policies, plans, procedures, and people. Incident response policies codify management directives. Incident response plans (IRPs) provide a well-defined, consistent, and organized approach for handling internal incidents as well as taking appropriate action when an external incident is traced back to the organization. Incident response procedures are detailed steps needed to implement the plan.

The Incident Response Plan

Having a good incident response plan and incident response process will help you minimize loss or theft of information and disruption of services caused by incidents. It will also help you enhance your incident response program by using lessons learned and information obtained during the security incident.

Section 2.3 of NIST Special Publication 800-61 Revision 2 goes over the incident response policies, plans, and procedures, including information on how to coordinate incidents and interact with outside parties. The policy elements described in NIST Special Publication 800-61 Revision 2 include the following:

  • Statement of management commitment

  • Purpose and objectives of the incident response policy

  • The scope of the incident response policy

  • Definition of computer security incidents and related terms

  • Organizational structure and definition of roles, responsibilities, and levels of authority

  • Prioritization or severity ratings of incidents

  • Performance measures

  • Reporting and contact forms

NIST’s incident response plan elements include the following:

  • Incident response plan’s mission

  • Strategies and goals of the incident response plan

  • Senior management approval of the incident response plan

  • Organizational approach to incident response

  • How the incident response team will communicate with the rest of the organization and with other organizations

  • Metrics for measuring the incident response capability and its effectiveness

  • Roadmap for maturing the incident response capability

  • How the program fits into the overall organization

NIST also defines standard operating procedures (SOPs) as “a delineation of the specific technical processes, techniques, checklists, and forms used by the incident response team. SOPs should be reasonably comprehensive and detailed to ensure that the priorities of the organization are reflected in response operations.” You learned details about SOPs in Chapter 8, “Communications and Operations Security.”

In Practice

Cybersecurity Incident Response Program Policy

Synopsis: To ensure that information security incidents are responded to, managed, and reported in a consistent and effective manner.

Policy Statement:

  • An incident response plan (IRP) will be maintained to ensure that information security incidents are responded to, managed, and reported in a consistent and effective manner.

  • The Office of Information Security is responsible for the establishment and maintenance of an IRP.

  • The IRP will, at a minimum, include instructions, procedures, and guidance related to

    • Preparation

    • Detection and investigation

    • Initial response

    • Containment

    • Eradication and recovery

    • Notification

    • Closure and post-incident activity

    • Documentation and evidence handling

  • In accordance with the Information Security Incident Personnel Policy, the IRP will further define personnel roles and responsibilities, including but not limited to incident response coordinators, designated incident handlers, and incident response team members.

  • All employees, contractors, consultants, and vendors will receive incident response training appropriate to their role.

  • The IRP must be annually authorized by the Board of Directors.

The Incident Response Process

NIST Special Publication 800-61 goes over the major phases of the incident response process in detail. You should become familiar with that publication because it provides additional information that will help you succeed in your security operations center (SOC). The important key points are summarized here.

NIST defines the major phases of the incident response process as illustrated in Figure 11-1.


FIGURE 11-1 NIST Incident Response Process

The Preparation Phase

The preparation phase includes creating and training the incident response team, as well as deploying the necessary tools and resources to successfully investigate and resolve cybersecurity incidents. In this phase, the incident response team creates a set of controls based on the results of risk assessments. The preparation phase also includes the following tasks:

  • Creating processes for incident handler communications and the facilities that will host the security operation center (SOC) and incident response team

  • Making sure that the organization has appropriate incident analysis hardware and software as well as incident mitigation software

  • Creating risk assessment capabilities within the organization

  • Making sure the organization has appropriately deployed host security, network security, and malware prevention solutions

  • Developing user awareness training

The Detection and Analysis Phase

The detection and analysis phase is one of the most challenging phases. Although some incidents are easy to detect (for example, a denial-of-service attack), many breaches and attacks are left undetected for weeks or even months. This is why detection may be the most difficult task in incident response. The typical network is full of blind spots where anomalous traffic goes undetected. Implementing analytics and correlation tools is critical to eliminating these network blind spots. As a result, the incident response team must react quickly to analyze and validate each incident. This is done by following a predefined process while documenting each step the analyst takes. NIST provides various recommendations for making incident analysis easier and more effective:

  • Profile networks and systems

  • Understand normal behaviors

  • Create a log retention policy

  • Perform event correlation

  • Maintain and use a knowledge base of information

  • Use Internet search engines for research

  • Run packet sniffers to collect additional data

  • Filter the data

  • Seek assistance from others

  • Keep all host clocks synchronized

  • Know the different types of attacks and attack vectors

  • Develop processes and procedures to recognize the signs of an incident

  • Understand the sources of precursors and indicators

  • Create appropriate incident documentation capabilities and processes

  • Create processes to effectively prioritize security incidents

  • Create processes to effectively communicate incident information (internal and external communications)
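Several of these recommendations (clock synchronization and event correlation in particular) can be illustrated with a small sketch. The following Python example is hypothetical: the log records, field names, and five-minute window are illustrative, not a NIST-mandated format. It groups time-stamped events from multiple sources by host so that related activity surfaces together:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical, already-parsed log records; in practice these would come from
# firewall, IDS, and host logs whose clocks are synchronized via NTP.
events = [
    {"time": datetime(2023, 5, 1, 10, 0, 5),  "host": "srv01", "source": "ids",  "msg": "port scan detected"},
    {"time": datetime(2023, 5, 1, 10, 0, 40), "host": "srv01", "source": "auth", "msg": "failed ssh logins"},
    {"time": datetime(2023, 5, 1, 14, 30, 0), "host": "srv02", "source": "av",   "msg": "malware quarantined"},
]

def correlate(events, window=timedelta(minutes=5)):
    """Group events per host that occur within `window` of the previous event."""
    by_host = defaultdict(list)
    for event in sorted(events, key=lambda e: e["time"]):
        by_host[event["host"]].append(event)
    clusters = []
    for host, host_events in by_host.items():
        cluster = [host_events[0]]
        for event in host_events[1:]:
            if event["time"] - cluster[-1]["time"] <= window:
                cluster.append(event)
            else:
                clusters.append((host, cluster))
                cluster = [event]
        clusters.append((host, cluster))
    return clusters

# Clusters with multiple sources reporting on one host are precursors worth triaging.
for host, cluster in correlate(events):
    if len(cluster) > 1:
        print(host, "->", [e["msg"] for e in cluster])
```

In practice this is the job of a SIEM or correlation engine; the sketch only shows why synchronized clocks matter, since the time-window comparison is meaningless if host clocks drift.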

Containment, Eradication, and Recovery

The containment, eradication, and recovery phase includes the following activities:

  • Evidence gathering and handling

  • Identifying the attacking hosts

  • Choosing a containment strategy to effectively contain and eradicate the attack, as well as to successfully recover from it

NIST Special Publication 800-61 Revision 2 also defines the following criteria for determining the appropriate containment, eradication, and recovery strategy:

  • The potential damage to and theft of resources

  • The need for evidence preservation

  • Service availability (for example, network connectivity as well as services provided to external parties)

  • Time and resources needed to implement the strategy

  • Effectiveness of the strategy (for example, partial containment or full containment)

  • Duration of the solution (for example, emergency workaround to be removed in four hours, temporary workaround to be removed in two weeks, or permanent solution)
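One lightweight way to apply these criteria consistently across candidate containment strategies is a weighted decision matrix. The following Python sketch is purely illustrative; the weights and ratings are hypothetical and are not defined by NIST SP 800-61:

```python
# Hypothetical weights for the containment-strategy criteria above (sum to 1.0)
weights = {
    "damage_and_theft": 0.25,
    "evidence_preservation": 0.15,
    "service_availability": 0.20,
    "time_and_resources": 0.15,
    "effectiveness": 0.15,
    "duration": 0.10,
}

def score_strategy(ratings):
    """Combine per-criterion ratings (0-10, higher = better fit) into one score."""
    return round(sum(weights[k] * ratings[k] for k in weights), 2)

# Invented ratings for two candidate strategies
full_isolation = {"damage_and_theft": 9, "evidence_preservation": 6,
                  "service_availability": 3, "time_and_resources": 7,
                  "effectiveness": 9, "duration": 8}
partial_block = {"damage_and_theft": 6, "evidence_preservation": 8,
                 "service_availability": 8, "time_and_resources": 8,
                 "effectiveness": 6, "duration": 6}

print(score_strategy(full_isolation), score_strategy(partial_block))
```

A team would calibrate the weights during the preparation phase so that, mid-incident, handlers compare strategies against an agreed-upon yardstick instead of improvising.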

Post-Incident Activity (Postmortem)

The post-incident activity phase includes lessons learned, how to use collected incident data, and evidence retention. NIST Special Publication 800-61 Revision 2 includes several questions that can be used as guidelines during the lessons learned meeting(s):

  • Exactly what happened, and at what times?

  • How well did the staff and management perform while dealing with the incident?

  • Were the documented procedures followed? Were they adequate?

  • What information was needed sooner?

  • Were any steps or actions taken that might have inhibited the recovery?

  • What would the staff and management do differently the next time a similar incident occurs?

  • How could information sharing with other organizations be improved?

  • What corrective actions can prevent similar incidents in the future?

  • What precursors or indicators should be watched for in the future to detect similar incidents?

  • What additional tools or resources are needed to detect, analyze, and mitigate future incidents?

Tabletop Exercises and Playbooks

Many organizations take advantage of tabletop (simulated) exercises to further test their capabilities. These tabletop exercises are an opportunity to practice and also to perform gap analysis. In addition, these exercises may allow the organization to create playbooks for incident response. Developing a playbook framework makes future analysis modular and extensible. A good playbook typically contains the following information:

  • Report identification

  • Objective statement

  • Result analysis

  • Data query/code

  • Analyst comments/notes

There are significant long-term advantages to having relevant and effective playbooks. When developing playbooks, focus on organization and clarity within your own framework. Having a playbook and detection logic is not enough: the playbook is only a proactive plan. Your plays must actually run to generate results, those results must be analyzed, and remedial actions must be taken when malicious events are found. This is why tabletop exercises are very important.
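As an illustration of the playbook fields listed above, a play can be captured in a simple structure. The following Python sketch is hypothetical; the field names mirror the list above, and the example play contents are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Play:
    """One entry in an incident response playbook (fields mirror the list above)."""
    report_id: str            # report identification
    objective: str            # objective statement
    result_analysis: str      # how results should be interpreted
    data_query: str           # data query/code the play runs
    analyst_notes: list = field(default_factory=list)  # analyst comments/notes

# Hypothetical play: hunt for outbound traffic to newly registered domains
play = Play(
    report_id="PB-0042",
    objective="Detect possible C2 traffic to domains registered in the last 7 days",
    result_analysis="Investigate any internal host with repeated hits",
    data_query="index=dns domain_age_days<7 | stats count by src_ip",
)
play.analyst_notes.append("Whitelist known CDN domains before triage.")
```

Keeping plays in a structured, machine-readable form is what makes the framework modular and extensible: plays can be versioned, reviewed, and run repeatedly.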

Tabletop exercises can be technical and can also be conducted at the executive level. You can create technical simulations for your incident response team as well as risk-based exercises for your executive and management staff. A simple methodology for an incident response tabletop exercise includes the following steps:

  1. Preparation: Identify the audience, what you want to simulate, and how the exercise will take place.

  2. Execution: Execute the simulation and record all findings to identify all areas for improvement in your program.

  3. Report: Create a report and distribute it to all the respective stakeholders. Narrow your assessment to specific facets of incident response, and compare the results with the existing incident response plans. You should also measure the coordination among different teams within the organization and/or with external organizations. Provide a good technical analysis and identify gaps.

Information Sharing and Coordination

During the investigation and resolution of a security incident, you may also need to communicate with outside parties regarding the incident. Examples include, but are not limited to, contacting law enforcement, fielding media inquiries, seeking external expertise, and working with Internet service providers (ISPs), the vendors of your hardware and software products, threat intelligence feed vendors, coordination centers, and members of other incident response teams. You can also share relevant incident indicator of compromise (IoC) information and other observables with industry peers. A good example of an information-sharing community is the Financial Services Information Sharing and Analysis Center (FS-ISAC).

Your incident response plan should account for these types of interactions with outside entities. It should also include information about how to interact with your organization’s public relations (PR) department, legal department, and upper management. You should get their buy-in before sharing information with outside parties to minimize the risk of information leakage; in other words, avoid sharing sensitive information regarding security incidents with unauthorized parties, because such leaks could lead to additional disruption and financial loss. You should also maintain a list of all the contacts at those external entities, along with a detailed record of all external communications for liability and evidentiary purposes.

Computer Security Incident Response Teams

There are different incident response teams. The most popular is the Computer Security Incident Response Team (CSIRT). Others include the following:

  • Product Security Incident Response Team (PSIRT)

  • National CSIRT and Computer Emergency Response Team (CERT)

  • Coordination center

  • The incident response team of a security vendor or Managed Security Service Provider (MSSP)

In this section, you learn about CSIRTs. The rest of the incident response team types are covered in the subsequent sections in this chapter.

The CSIRT is typically the team that works hand in hand with the information security teams (often called InfoSec). In smaller organizations, InfoSec and CSIRT functions may be combined and provided by the same team. In large organizations, the CSIRT focuses on the investigation of computer security incidents, whereas the InfoSec team is tasked with the implementation of security configurations, monitoring, and policies within the organization.

Establishing a CSIRT involves the following steps:

STEP 1. Defining the CSIRT constituency.

STEP 2. Ensuring management and executive support.

STEP 3. Making sure that the proper budget is allocated.

STEP 4. Deciding where the CSIRT will reside within the organization’s hierarchy.

STEP 5. Determining whether the team will be central, distributed, or virtual.

STEP 6. Developing the process and policies for the CSIRT.

It is important to recognize that every organization is different, and these steps can be accomplished in parallel or in sequence. However, defining the constituency of a CSIRT is certainly one of the first steps in the process. When defining the constituency of a CSIRT, one should answer the following questions:

  • Who will be the “customer” of the CSIRT?

  • What is the scope? Will the CSIRT cover only the organization or also entities external to the organization? For example, at Cisco, all internal infrastructure and Cisco’s websites and tools (that is, cisco.com) are a responsibility of the Cisco CSIRT, and any incident or vulnerability concerning a Cisco product or service is the responsibility of the Cisco PSIRT.

  • Will the CSIRT provide support for the complete organization or only for a specific area or segment? For example, an organization may have a CSIRT for traditional infrastructure and IT capabilities and a separate one dedicated to cloud security.

  • Will the CSIRT be responsible for part of the organization or all of it? If external entities will be included, how will they be selected?

Determining the value of a CSIRT can be challenging. One of the main questions that executives will ask is, what is the return on investment for having a CSIRT? The main goals of the CSIRT are to minimize risk, contain cyber damage, and save money by preventing incidents from happening—and when they do occur, to mitigate them efficiently. For example, the smaller the scope of the damage, the less money you need to spend to recover from a compromise (including brand reputation). Many studies in the past have covered the cost of security incidents and the cost of breaches. Also, the Ponemon Institute periodically publishes reports covering these costs. It is a good practice to review and calculate the “value add” of the CSIRT. This calculation can be used to determine when to invest more, not only in a CSIRT, but also in operational best practices. In some cases, an organization might even outsource some of the cybersecurity functions to a managed service provider, if the organization cannot afford or retain security talent.

An incident response team must have several basic policies and procedures in place to operate satisfactorily, including the following:

  • Incident classification and handling

  • Information classification and protection

  • Information dissemination

  • Record retention and destruction

  • Acceptable usage of encryption

  • Engaging and cooperating with external groups (other IRTs, law enforcement, and so on)

Also, some additional policies or procedures can be defined, such as the following:

  • Hiring policy

  • Using an outsourcing organization to handle incidents

  • Working across multiple legal jurisdictions

Even more policies can be defined depending on the team’s circumstances. The important thing to remember is that not all policies need to be defined on the first day.

The following are great sources of information from the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) that you can leverage when you are constructing your policy and procedure documents:

  • ISO/IEC 27001:2005: “Information Technology—Security Techniques—Information Security Management Systems—Requirements”

  • ISO/IEC 27002:2005: “Information Technology—Security Techniques—Code of Practice for Information Security Management”

  • ISO/IEC 27005:2008: “Information Technology—Security techniques—Information Security Risk Management”

  • ISO/PAS 22399:2007: “Societal Security—Guidelines for Incident Preparedness and Operational Continuity Management”

  • ISO/IEC 27035:2011: “Information Technology—Security Techniques—Information Security Incident Management”

CERT provides a good overview of the goals and responsibilities of a CSIRT at the following site: https://www.cert.org/incident-management/csirt-development/csirt-faq.cfm.

Product Security Incident Response Teams (PSIRTs)

Software and hardware vendors may have separate teams that handle the investigation, resolution, and disclosure of security vulnerabilities in their products and services. Typically, these teams are called Product Security Incident Response Teams (PSIRTs). Before you can understand how a PSIRT operates, you must understand what constitutes a security vulnerability.

The U.S. National Institute of Standards and Technology (NIST) defines a security vulnerability as follows:

“A flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system’s security policy.”

There are many more definitions, but they tend to be variations on the one from the NIST.

Security Vulnerabilities and Their Severity

Why are product security vulnerabilities a concern? Because each vulnerability represents a potential risk that threat actors can use to compromise your systems and your network. Each vulnerability carries an associated amount of risk with it. One of the most widely adopted standards to calculate the severity of a given vulnerability is the Common Vulnerability Scoring System (CVSS), which has three components: base, temporal, and environmental scores. Each component is presented as a score on a scale from 0 to 10.

CVSS is an industry standard maintained by FIRST that is used by many PSIRTs to convey information about the severity of vulnerabilities they disclose to their customers.

In CVSS, a vulnerability is evaluated under three aspects and a score is assigned to each of them:

  • The base group represents the intrinsic characteristics of a vulnerability that are constant over time and do not depend on a user-specific environment. This is the most important information and the only one that’s mandatory to obtain a vulnerability score.

  • The temporal group assesses the vulnerability as it changes over time.

  • The environmental group represents the characteristics of a vulnerability, taking into account the organizational environment.

The score for the base group is between 0 and 10, where 0 is the least severe and 10 is assigned to highly critical vulnerabilities. For example, a highly critical vulnerability could allow an attacker to remotely compromise a system and get full control. Additionally, the score comes in the form of a vector string that identifies each of the components used to make up the score.
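A CVSS v3 vector string is simply a slash-separated list of metric:value pairs, so it can be split apart in a few lines. The following Python sketch is illustrative; the vector shown is a generic example, not taken from a specific advisory:

```python
def parse_cvss_vector(vector):
    """Split a CVSS v3 vector string into its version and a metric -> value map."""
    parts = vector.split("/")
    version = parts[0]                              # e.g., "CVSS:3.0"
    metrics = dict(part.split(":") for part in parts[1:])
    return version, metrics

version, metrics = parse_cvss_vector("CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H")
print(version)        # CVSS:3.0
print(metrics["AV"])  # N  (Network)
print(metrics["A"])   # H  (High availability impact)
```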

The formula used to obtain the score takes into account various characteristics of the vulnerability and how the attacker is able to leverage these characteristics.

CVSSv3 defines several characteristics for the base, temporal, and environmental groups.

The base group defines Exploitability metrics that measure how the vulnerability can be exploited, as well as Impact metrics that measure the impact on confidentiality, integrity, and availability. In addition to these two sets of metrics, a metric called Scope (S) is used to convey the impact on systems that are affected by the vulnerability but do not contain the vulnerable code.

The Exploitability metrics include the following:

  • Attack Vector (AV) represents the level of access an attacker needs to have to exploit a vulnerability. It can assume four values:

    • Network (N)

    • Adjacent (A)

    • Local (L)

    • Physical (P)

  • Attack Complexity (AC) represents the conditions beyond the attacker’s control that must exist in order to exploit the vulnerability. The values can be the following:

    • Low (L)

    • High (H)

  • Privileges Required (PR) represents the level of privileges an attacker must have to exploit the vulnerability. The values are as follows:

    • None (N)

    • Low (L)

    • High (H)

  • User Interaction (UI) captures whether a user interaction is needed to perform an attack. The values are as follows:

    • None (N)

    • Required (R)

  • Scope (S) captures the impact on systems other than the system being scored. The values are as follows:

    • Unchanged (U)

    • Changed (C)

The Impact metrics include the following:

  • Confidentiality (C) measures the degree of impact to the confidentiality of the system. It can assume the following values:

    • None (N)

    • Low (L)

    • High (H)

  • Integrity (I) measures the degree of impact to the integrity of the system. It can assume the following values:

    • None (N)

    • Low (L)

    • High (H)

  • Availability (A) measures the degree of impact to the availability of the system. It can assume the following values:

    • None (N)

    • Low (L)

    • High (H)

The temporal group includes three metrics:

  • Exploit Code Maturity (E), which measures whether or not a public exploit is available

  • Remediation Level (RL), which indicates whether a fix or workaround is available

  • Report Confidence (RC), which indicates the degree of confidence in the existence of the vulnerability

The environmental group includes two main metrics:

  • Security Requirements (CR, IR, AR), which indicate the importance of confidentiality, integrity, and availability requirements for the system

  • Modified Base Metrics (MAV, MAC, MAPR, MUI, MS, MC, MI, MA), which allow the organization to tweak the base metrics based on specific characteristics of the environment

For example, a vulnerability that might allow a remote attacker to crash the system by sending crafted IP packets would have the following values for the base metrics:

  • Attack Vector (AV) would be Network because the attacker can be anywhere and can send packets remotely.

  • Attack Complexity (AC) would be Low because it is trivial to generate malformed IP packets (for example, via the Scapy Python tool).

  • Privileges Required (PR) would be None because no privileges are required by the attacker on the target system.

  • User Interaction (UI) would also be None because the attacker does not need to interact with any user of the system to carry out the attack.

  • Scope (S) would be Unchanged if the attack does not cause other systems to fail.

  • Confidentiality Impact (C) would be None because the primary impact is on the availability of the system.

  • Integrity Impact (I) would be None because the primary impact is on the availability of the system.

  • Availability Impact (A) would be High because the device could become completely unavailable while crashing and reloading.

Additional examples of CVSSv3 scoring are available at the FIRST website (https://www.first.org/cvss).
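The base-score arithmetic itself is published by FIRST. As a hedged sketch (using the CVSS v3.0 constants and round-up rule for the Scope Unchanged case only; verify results against the FIRST calculator), the crafted-IP-packet example above scores as follows in Python:

```python
import math

# CVSS v3.0 metric weights from the FIRST specification (Scope Unchanged case)
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required (Scope Unchanged)
UI  = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"N": 0.00, "L": 0.22, "H": 0.56}              # Confidentiality/Integrity/Availability

def roundup(x):
    """CVSS 'round up to one decimal place' rule."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope, c, i, a):
    assert scope == "U", "sketch covers Scope:Unchanged only"
    isc_base = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * isc_base
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H -> 7.5 (High)
print(base_score("N", "L", "N", "N", "U", "N", "N", "H"))
```

The Scope Changed case uses different Privileges Required weights and a different combining formula, which is why the sketch deliberately rejects it.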

Vulnerability Chaining Role in Fixing Prioritization

In numerous instances, security vulnerabilities are not exploited in isolation. Threat actors exploit more than one vulnerability in a chain to carry out their attack and compromise their victims. By leveraging different vulnerabilities in a chain, attackers can infiltrate progressively further into a system or network and gain more control over it. PSIRT teams, developers, security professionals, and users must all be aware of chaining because it can change the order in which vulnerabilities need to be fixed or patched in the affected system. For instance, multiple low-severity vulnerabilities can become a severe one when they are combined.

Performing vulnerability chaining analysis is not a trivial task. Although several commercial companies claim that they can easily perform chaining analysis, in reality the methods and procedures that can be included as part of a chain vulnerability analysis are pretty much endless. PSIRT teams should utilize an approach that works for them to achieve the best end result.

Fixing Theoretical Vulnerabilities

Exploits cannot exist without a vulnerability; however, there isn’t always an exploit for a given vulnerability. Earlier in this chapter you were reminded of the definition of a vulnerability. As another reminder, an exploit is not a vulnerability. An exploit is a concrete manifestation, either a piece of software or a collection of reproducible steps, that leverages a given vulnerability to compromise an affected system.

In some cases, users call vulnerabilities without exploits “theoretical vulnerabilities.” One of the biggest challenges with “theoretical vulnerabilities” is that there are many smart people out there capable of exploiting them. If you do not know how to exploit a vulnerability today, it does not mean that someone else will not find a way in the future. In fact, someone else may already have found a way to exploit the vulnerability and perhaps is even selling the exploit of the vulnerability in underground markets without public knowledge.

PSIRT personnel should understand that there is no such thing as an “entirely theoretical” vulnerability. Sure, having a working exploit can ease the reproducible steps and help verify whether the same vulnerability is present in different systems. However, just because a vulnerability is reported without an accompanying exploit, you should not completely deprioritize it.

Internally Versus Externally Found Vulnerabilities

A PSIRT can learn about a vulnerability in a product or service during internal testing or during the development phase. However, vulnerabilities can also be reported by external entities, such as security researchers, customers, and other vendors.

The dream of any vendor is to be able to find and patch all security vulnerabilities during the design and development phases. However, that is close to impossible, which is why a secure development life cycle (SDL) is extremely important for any organization that produces software and hardware. Cisco has an SDL program that is documented at the following URL: www.cisco.com/c/en/us/about/security-center/security-programs/secure-development-lifecycle.html.

Cisco defines its SDL as “a repeatable and measurable process we’ve designed to increase the resiliency and trustworthiness of our products.” Cisco’s SDL is part of Cisco Product Development Methodology (PDM) and ISO 9000 compliance requirements. It includes, but is not limited to, the following:

  • Base product security requirements

  • Third-party software (TPS) security

  • Secure design

  • Secure coding

  • Secure analysis

  • Vulnerability testing

The goal of the SDL is to provide tools and processes designed to accelerate the product development methodology while producing secure, resilient, and trustworthy systems. TPS security is one of the most important tasks for any organization. Most of today’s organizations use open source and third-party libraries. This approach creates two requirements for the product security team. The first is to know which TPS libraries are used, reused, and where. The second is to patch any vulnerabilities that affect such libraries or TPS components. For example, if a new vulnerability in OpenSSL is disclosed, what do you have to do? Can you quickly assess the impact of such a vulnerability across all your products?

If you include commercial TPS, is the vendor of that software transparently disclosing all the security vulnerabilities in it? Nowadays, many organizations include security vulnerability disclosure SLAs in their contracts with third-party vendors. This is very important because many TPS vulnerabilities (both commercial and open source) go unpatched for many months—or even years.

TPS software security is a monumental task for any company of any size. To get a sense of the scale of TPS code usage, visit the third-party security bulletins published by Cisco at https://tools.cisco.com/security/center/publicationListing.x?product=NonCisco#~Vulnerabilities. Another good resource is CVE Details (www.cvedetails.com).

Many tools are available on the market today to enumerate all the open source components used in a product. These tools either interrogate the product source code or scan binaries for the presence of TPS.
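Even without a commercial tool, the core idea of a TPS inventory check can be sketched in a few lines. In the following Python example, the package names, versions, and advisory data are entirely made up for illustration; a real implementation would parse a lock file and pull advisories from a feed such as NVD:

```python
# Hypothetical dependency manifest and advisory data (illustrative only)
manifest = {"openssl-wrapper": "1.0.2", "jsonlib": "2.4.1", "zipper": "0.9"}
advisories = {
    "openssl-wrapper": {"1.0.1", "1.0.2"},  # versions with known flaws (made up)
    "zipper": {"0.8"},
}

def flag_vulnerable(manifest, advisories):
    """Return (package, version) pairs that match an advisory entry."""
    return [(pkg, ver) for pkg, ver in manifest.items()
            if ver in advisories.get(pkg, set())]

print(flag_vulnerable(manifest, advisories))  # [('openssl-wrapper', '1.0.2')]
```

The hard part in practice is not this lookup but building and maintaining an accurate inventory, which is exactly what the commercial tools automate.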

National CSIRTs and Computer Emergency Response Teams (CERTs)

Numerous countries have their own Computer Emergency Response (or Readiness) Teams. Examples include the US-CERT (https://www.us-cert.gov), Indian Computer Emergency Response Team (http://www.cert-in.org.in), CERT Australia (https://cert.gov.au), and the Australian Computer Emergency Response Team (https://www.auscert.org.au/). The Forum of Incident Response and Security Teams (FIRST) website includes a list of all the national CERTs and other incident response teams at https://www.first.org/members/teams.

These national CERTs and CSIRTs aim to protect their citizens by providing security vulnerability information, security awareness training, best practices, and other information. For example, the following is the US-CERT mission posted at https://www.us-cert.gov/about-us:

“US-CERT’s critical mission activities include:

  • Providing cybersecurity protection to Federal civilian executive branch agencies through intrusion detection and prevention capabilities.

  • Developing timely and actionable information for distribution to federal departments and agencies; state, local, tribal and territorial (SLTT) governments; critical infrastructure owners and operators; private industry; and international organizations.

  • Responding to incidents and analyzing data about emerging cyber threats.

  • Collaborating with foreign governments and international entities to enhance the nation’s cybersecurity posture.”

Coordination Centers

Several organizations around the world also help with the coordination of security vulnerability disclosures to vendors, hardware and software providers, and security researchers.

One of the best examples is the CERT Division of the Software Engineering Institute (SEI). CERT provides security vulnerability coordination and research. It is an important stakeholder of multi-vendor security vulnerability disclosures and coordination. Additional information about CERT can be obtained at their website at https://www.sei.cmu.edu/about/divisions/cert/index.cfm#cert-division-what-we-do.

Incident Response Providers and Managed Security Service Providers (MSSPs)

Cisco, along with several other vendors, provides incident response and managed security services to its customers. These incident response teams and outsourced CSIRTs operate a bit differently because their task is to provide support to their customers. However, they practice the tasks outlined earlier in this chapter for incident response and CSIRTs.

The following are examples of these teams and their services:

  • The Cisco Incident Response Service: Provides Cisco customers with readiness or proactive services and post-breach support. The proactive services include infrastructure breach preparedness assessments, security operations readiness assessments, breach communications assessment, and security operations and incident response training. The post-breach (or reactive) services include the evaluation and investigation of the attack, countermeasure development and deployment, as well as the validation of the countermeasure effectiveness.

  • FireEye Incident Response Services.

  • Crowdstrike Incident Response Services.

  • SecureWorks Managed Security Services.

  • Cisco’s Active Threat Analytics (ATA) managed security service.

Managed services, such as the SecureWorks Managed Security Services, Cisco ATA, and others, offer customers 24-hour continuous monitoring and advanced-analytics capabilities, combined with threat intelligence as well as security analysts and investigators to detect security threats in customer networks. Outsourcing has long been a practice for many companies, and the growing complexity of cybersecurity has made outsourced incident response increasingly common.

Key Incident Management Personnel

Key incident management personnel include incident response coordinators, designated incident handlers, incident response team members, and external advisors. In various organizations, they may have different titles, but the roles are essentially the same.

The incident response coordinator (IRC) is the central point of contact for all incidents. Incident reports are directed to the IRC. The IRC verifies and logs the incident. Based on predefined criteria, the IRC notifies appropriate personnel, including the designated incident handler (DIH). The IRC is a member of the incident response team (IRT) and is responsible for maintaining all non-evidence-based incident-related documentation.

Designated incident handlers (DIHs) are senior-level personnel who have the crisis management and communication skills, experience, knowledge, and stamina to manage an incident. DIHs are responsible for three critical tasks: incident declaration, liaison with executive management, and managing the incident response team (IRT).

The incident response team (IRT) is a carefully selected and well-trained team of professionals that provides services throughout the incident life cycle. Depending on the size of the organization, there may be a single team or multiple teams, each with its own specialty. The IRT members generally represent a cross-section of functional areas, including senior management, information security, information technology (IT), operations, legal, compliance, HR, public affairs and media relations, customer service, and physical security. Some members may be expected to participate in every response effort, whereas others (such as compliance) may restrict involvement to relevant events. The team, as directed by the DIH, is responsible for further analysis; evidence handling and documentation; containment, eradication, and recovery; notification (as required); and post-incident activities.

Tasks assigned to the IRT include but are not limited to the following:

  • Overall management of the incident

  • Triage and impact analysis to determine the extent of the situation

  • Development and implementation of containment and eradication strategies

  • Compliance with government and/or other regulations

  • Communication and follow-up with affected parties and/or individuals

  • Communication and follow-up with other external parties, including the Board of Directors, business partners, government regulators (including federal, state, and other administrators), law enforcement, representatives of the media, and so on, as needed

  • Root cause analysis and lessons learned

  • Revision of policies/procedures necessary to prevent any recurrence of the incident

Figure 11-2 illustrates the incident response roles and responsibilities.

A figure shows the incident response roles and responsibilities.

FIGURE 11-2 Incident Response Roles and Responsibilities

Incident Response Training and Exercises

Establishing a robust response capability ensures that the organization is prepared to respond to an incident swiftly and effectively. Responders should receive training specific to their individual and collective responsibilities. Recurring tests, drills, and challenging incident response exercises can make a huge difference in responder ability. Knowing what is expected decreases the pressure on the responders and reduces errors. It should be stressed that the objective of incident response exercises isn’t to get an “A” but rather to honestly evaluate the plan and procedures, to identify missing resources, and to learn to work together as a team.

In Practice

Incident Response Authority Policy

Synopsis: To vest authority in those charged with responding to and/or managing an information security incident.

Policy Statement:

  • The Chief Information Security Officer has the authority to appoint the IRC, DIHs, and IRT members:

    • All responders must receive training commensurate with their role and responsibilities.

    • All responders must participate in recurring drills and exercises.

  • During a security incident, as well as during drills and exercises, incident management and incident response–related duties supersede normal duties.

  • The Chief Operating Officer and/or legal counsel have the authority to notify law enforcement or regulatory officials.

  • The Chief Operating Officer, Board of Directors, and/or legal counsel have the authority to engage outside personnel, including but not limited to forensic investigators, experts in related fields (such as security, technology, and compliance), and specialized legal counsel.

What Happened? Investigation and Evidence Handling

The primary reason for gathering evidence is to figure out what happened in order to contain and resolve the incident as quickly as possible. As an incident responder, it is easy to get caught up in the moment. It may not be apparent that careful evidence acquisition, handling, and documentation are important or even necessary. Consider the scenario of a workstation malware infection. The first impression may be that the malware download was inadvertent. This could be true, or perhaps it was the work of a malicious insider or careless business vendor. Until you have the facts, you don’t know. Regardless of the source, if the malware infection resulted in a compromise of legally protected information, the company could be a target of a negligence lawsuit or regulatory action, in which case evidence of how the infection was contained and eradicated could be used to support the company’s position. Because there are so many variables, by default, data handlers should treat every investigation as if it would lead to a court case.

Documenting Incidents

The initial documentation should create an incident profile. The profile should include the following:

  • How the incident was detected

  • The incident scenario

  • The time the incident occurred

  • Who or what reported the incident

  • Contact information for involved personnel

  • A brief description of the incident

  • Snapshots of all on-scene conditions

All ongoing incident response–related activity should be logged and time-stamped. In addition to actions taken, the log should include decisions, record of contact (internal and external resources), and recommendations.
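The logging practice described above can be sketched as a simple append-only structure. This is an illustrative sketch only; the class and field names are assumptions, not a prescribed format:

```python
from datetime import datetime, timezone

class IncidentLog:
    """Append-only, time-stamped log of incident response activity."""

    def __init__(self):
        self._entries = []  # entries are never modified or removed

    def record(self, actor, category, detail):
        """Add one time-stamped entry; the category might be an action,
        a decision, a record of contact, or a recommendation."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "category": category,
            "detail": detail,
        }
        self._entries.append(entry)
        return entry

    def entries(self):
        # Return a copy so callers cannot alter the recorded history
        return list(self._entries)

# Example: documenting activity as the incident is being handled
log = IncidentLog()
log.record("J. Smith (DIH)", "decision", "Isolate workstation WS-042 from network")
log.record("A. Jones (IRT)", "action", "Disabled account 'svc-backup'")
```

Note that entries are captured at the moment the activity occurs, consistent with the guidance that documentation should not be reconstructed after the fact.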

Documentation specific to computer-related activity should be kept separate from general documentation because of the confidential nature of what is being performed and/or found. All documentation should be sequential and time/date stamped, and should include exact commands entered into systems, results of commands, actions taken (for example, logging on, disabling accounts, applying router filters) as well as observations about the system and/or incident. Documentation should occur as the incident is being handled, not after.

Incident documentation should not be shared with anyone outside the team without the express permission of the DIH or executive management. If there is any expectation that the network has been compromised, documentation should not be saved on a network-connected device.

Working with Law Enforcement

Depending on the nature of the situation, it may be necessary to contact local, state, or federal law enforcement. The decision to do so should be discussed with legal counsel. It is important to recognize that the primary mission of law enforcement is to identify the perpetrators and build a case. There may be times when the law enforcement agency requests that the incident or attack continue while they work to gather evidence. Although this objective appears to be at odds with the organizational objective to contain the incident, it is sometimes the best course of action. The IRT should become acquainted with applicable law enforcement representatives before an incident occurs to discuss the types of incidents that should be reported to them, who to contact, what evidence should be collected, and how it should be collected.

If the decision is made to contact law enforcement, it is important to do so as early in the response life cycle as possible while the trail is still hot. On a federal level, both the Secret Service and the Federal Bureau of Investigation (FBI) investigate cyber incidents. The Secret Service’s investigative responsibilities extend to crimes that involve financial institution fraud, computer and telecommunications fraud, identity theft, access device fraud (for example, ATM or point of sale systems), electronic funds transfers, money laundering, corporate espionage, computer system intrusion, and Internet-related child pornography and exploitation. The FBI’s investigation responsibilities include cyber-based terrorism, espionage, computer intrusions, and major cyber fraud. If the missions appear to overlap, it is because they do. Generally, it is best to reach out to the local Secret Service or FBI office and let them determine jurisdiction.

FYI: The Authors of the Mirai Botnet Plead Guilty

In late 2017, the authors of the Mirai malware and botnet (21-year-old Paras Jha from Fanwood, N.J., and Josiah White, 20, from Washington, Pennsylvania) pleaded guilty. Mirai is malware that compromised hundreds of thousands of Internet of Things devices, such as security cameras, routers, and digital video recorders, for use in large-scale attacks.

KrebsOnSecurity conducted a four-month inquiry into the Mirai attacks and published the results in "Who Is Anna Senpai, the Mirai Worm Author?"—easily the longest story in the site's history, citing a bounty of clues pointing back to Jha and White. Additional details can be found at: https://krebsonsecurity.com/2017/12/mirai-iot-botnet-co-authors-plead-guilty/.

Understanding Forensic Analysis

Forensics is the application of science to the identification, collection, examination, and analysis of data while preserving the integrity of the information. Forensic tools and techniques are often used to find the root cause of an incident or to uncover facts. In addition to reconstructing security incidents, digital forensic techniques can be used for investigating crimes and internal policy violations, troubleshooting operational problems, and recovering from accidental system damage.

As described in NIST Special Publication 800-86, the process for performing digital forensics includes collection, examination, analysis, and reporting:

  • Collection: The first phase in the process is to identify, label, record, and acquire data from the possible sources of relevant data, while following guidelines and procedures that preserve the integrity of the data. Collection is typically performed in a timely manner because of the likelihood of losing dynamic data, such as current network connections, as well as losing data from battery-powered devices.

  • Examination: Examinations involve forensically processing large amounts of collected data using a combination of automated and manual methods to assess and extract data of particular interest, while preserving the integrity of the data.

  • Analysis: The next phase of the process is to analyze the results of the examination, using legally justifiable methods and techniques, to derive useful information that addresses the questions that were the impetus for performing the collection and examination.

  • Reporting: The final phase is reporting the results of the analysis, which may include describing the actions used, explaining how tools and procedures were selected, determining what other actions need to be performed, and providing recommendations for improvement to policies, guidelines, procedures, tools, and other aspects of the forensic process. The formality of the reporting step varies greatly depending on the situation.
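In the collection phase, preserving data integrity is commonly accomplished by computing a cryptographic hash of each acquired item at the time of acquisition; re-hashing later and comparing digests proves the copy is unchanged. The following is a minimal sketch of that practice (the function name is an illustrative assumption):

```python
import hashlib

def sha256_of_file(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading in chunks so
    large evidence images do not have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# At collection time, record the digest alongside the evidence item.
# During examination, re-hash the working copy and compare: a matching
# digest demonstrates the data has not been altered since acquisition.
```

In practice, forensic suites perform this hashing automatically during imaging, but the underlying principle is exactly what the sketch shows.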

Incident handlers performing forensic tasks need to have a reasonably comprehensive knowledge of forensic principles, guidelines, procedures, tools, and techniques, as well as antiforensic tools and techniques that could conceal or destroy data. It is also beneficial for incident handlers to have expertise in information security and specific technical subjects, such as the most commonly used operating systems, file systems, applications, and network protocols within the organization. Having this type of knowledge facilitates faster and more effective responses to incidents. Incident handlers also need a general, broad understanding of systems and networks so that they can determine quickly which teams and individuals are well suited to providing technical expertise for particular forensic efforts, such as examining and analyzing data for an uncommon application.

FYI: CCFP—Certified Cyber Forensics Professional

The CCFP certification is offered by (ISC)2. According to (ISC)2, "the Certified Cyber Forensics Professional (CCFP) credential indicates expertise in forensics techniques and procedures, standards of practice, and legal and ethical principles to assure accurate, complete, and reliable digital evidence admissible to a court of law. It also indicates the ability to apply forensics to other information security disciplines, such as e-discovery, malware analysis, or incident response." To learn more, go to https://www.isc2.org/ccfp/.

All (ISC)2 certifications are accredited by the American National Standards Institute (ANSI) to be in compliance with the International Organization for Standardization and International Electrotechnical Commission (ISO/IEC) 17024 Standards.

Understanding Chain of Custody

Chain of custody applies to physical, digital, and forensic evidence. Evidentiary chain of custody is used to prove that evidence has not been altered from the time it was collected through production in court. This means that the moment evidence is collected, every transfer of evidence from person to person must be documented, and it must be provable that nobody else could have accessed that evidence. In the case of legal action, the chain of custody documentation will be available to opposing counsel through the information discovery process and may become public. Confidential information should be included in the document only if absolutely necessary.

To maintain an evidentiary chain, a detailed log should be maintained that includes the following information:

  • Where and when (date and time) evidence was discovered.

  • Identifying information such as the location, serial number, model number, host name, media access control (MAC) address, and/or IP address.

  • Name, title, and phone number of each person who discovered, collected, handled, or examined the evidence.

  • Where evidence was stored/secured and during what time period.

  • If the evidence has changed custody, how and when the transfer occurred (include shipping numbers, and so on).

The relevant person should sign and date each entry in the record.
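The chain-of-custody log described above maps naturally onto a simple record structure. This sketch is illustrative only; the field names are assumptions, not mandated by any standard:

```python
from dataclasses import dataclass, field

@dataclass
class CustodyEvent:
    when: str          # date and time of the event
    who: str           # name, title, and phone number
    action: str        # discovered, collected, examined, transferred, stored
    location: str      # where the evidence was at the time
    notes: str = ""    # e.g., shipping numbers for a transfer

@dataclass
class EvidenceItem:
    description: str           # e.g., model, serial number, host name
    identifiers: dict          # MAC address, IP address, and so on
    custody: list = field(default_factory=list)

    def log_event(self, event):
        # Every handoff is appended; prior entries are never rewritten
        self.custody.append(event)

# Example: recording discovery and a later transfer to law enforcement
item = EvidenceItem("Laptop, S/N ABC123",
                    {"MAC": "00:11:22:33:44:55"})
item.log_event(CustodyEvent("2018-03-01 09:14 UTC",
                            "J. Smith, DIH, x1234",
                            "collected", "Office 210"))
item.log_event(CustodyEvent("2018-03-02 10:05 UTC",
                            "Det. R. Lee, City PD, 555-0100",
                            "transferred", "Evidence locker B",
                            notes="Receipt #4471"))
```

The append-only discipline mirrors the legal requirement: every transfer is documented, and the record itself demonstrates that no undocumented access occurred.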

Storing and Retaining Evidence

It is not unusual to retain all evidence for months or years after the incident ends. Evidence, logs, and data associated with the incident should be placed in tamper-resistant containers, grouped together, and put in a limited-access location. Only incident investigators, executive management, and legal counsel should have access to the storage facility. If and when evidence is turned over to law enforcement, an itemized inventory of all the items should be created and verified with the law enforcement representative. The law enforcement representative should sign and date the inventory list.

Evidence needs to be retained until all legal actions have been completed. Legal action could be civil, criminal, regulatory, or personnel-related. Evidence-retention parameters should be documented in policy. Retention schedules should include the following categories: internal only, civil, criminal, regulatory, personnel-related incident, and to-be-determined (TBD). When categorization is in doubt, legal counsel should be consulted. If there is an organizational retention policy, a notation should be included that evidence-retention schedules (if longer) supersede operational or regulatory retention requirements.

In Practice

Evidence Handling and Use Policy

Synopsis: To ensure that evidence is handled in accordance with legal requirements.

Policy Statement:

  • All evidence, logs, and data associated with the incident must be handled as follows:

    • All evidence, logs, and data associated with the incident must be labeled.

    • All evidence, logs, and data associated with the incident should be placed in tamper-resistant containers, grouped together, and put in a limited access location.

  • All evidence handling must be recorded on a chain of custody.

  • Unless otherwise instructed by legal counsel or law enforcement officials, all internal digital evidence should be handled in accordance with the procedures described in “Electronic Crime Scene Investigation: A Guide for First Responders, Second Edition” from the United States Department of Justice, National Institute of Justice (April 2008). If not possible, deviations must be noted.

  • Unless otherwise instructed by legal counsel or law enforcement officials, subsequent internal forensic investigation and analysis should follow the guidelines provided in “Forensic Examination of Digital Evidence: A Guide for Law Enforcement” from the United States Department of Justice, National Institute of Justice (April 2004). If not possible, deviations must be noted.

  • Executive management and the DIH have the authority to engage outside expertise for forensic evidence handling, investigation, and analysis.

  • Exceptions to this policy can be authorized only by legal counsel.

Data Breach Notification Requirements

A component of incident management is to understand, evaluate, and be prepared to comply with the legal responsibility to notify affected parties. Most states have some form of data breach notification laws. Federal regulations, including but not limited to the Gramm-Leach-Bliley Act (GLBA), the Health Information Technology for Economic and Clinical Health (HITECH) Act, the Federal Information Security Management Act (FISMA), and the Family Educational Rights and Privacy Act (FERPA), all address the protection of personally identifiable information (PII; also referred to as nonpublic personal information, or NPPI) and may apply in the event of an incident.

A data breach is widely defined as an incident that results in compromise, unauthorized disclosure, unauthorized acquisition, unauthorized access, or unauthorized use or loss of control of legally protected PII, including the following:

  • Any information that can be used to distinguish or trace an individual’s identity, such as name, SSN, date and place of birth, mother’s maiden name, or biometric records.

  • Any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.

  • Information that, standing alone, is not generally considered personally identifiable, because many people share the same trait, such as first or last name, country, state, ZIP code, age (without birth date), gender, race, or job position. However, multiple pieces of information, none of which alone may be considered personally identifiable, may uniquely identify a person when combined.

Incidents resulting in unauthorized access to PII are taken seriously because the information can be used by criminals to make false identification documents (including drivers’ licenses, passports, and insurance certificates), make fraudulent purchases and insurance claims, obtain loans or establish lines of credit, and apply for government and military benefits.

As we will discuss, the laws vary and sometimes even conflict in their requirements regarding the right of the individual to be notified, the manner in which they must be notified, and the information to be provided. What is consistent, however, is that notification requirements apply regardless of whether an organization stores and manages its data directly or through a third party, such as a cloud service provider.

FYI: Equifax Breach

A massive data breach at Equifax in 2017 raised the risk of identity theft for over 145 million U.S. consumers.

If you live in the United States and have a credit report, there’s a good chance that you’re one of those 145+ million American consumers whose sensitive personal information was exposed. Equifax is one of the nation’s three major credit reporting agencies.

According to Equifax, the breach lasted from mid-May through July. Threat actors accessed consumers' names, Social Security numbers, birth dates, addresses, and, in some instances, driver's license numbers. They also stole credit card numbers for about 209,000 people and dispute documents with personal identifying information for about 182,000 people. And they grabbed personal information of people in the UK and Canada, too. This breach highlighted the need for public disclosure of breaches.

The VERIS Community Database (VCDB) is an initiative that was launched to catalog security incidents in the public domain. VCDB contains raw data for thousands of security incidents shared under a Creative Commons license. You can download the latest release, follow the latest changes, and even help catalog and code incidents to grow the database on GitHub at https://github.com/vz-risk/VCDB.

Is There a Federal Breach Notification Law?

The short answer is, there is not. Consumer information breach notification requirements have historically been determined at the state level. There are, however, federal statutes and regulations that require certain regulated sectors (such as health care, financial, and investment) to protect certain types of personal information, implement information security programs, and provide notification of security breaches. In addition, federal departments and agencies are obligated by memorandum to provide breach notification. The Veterans Administration is the only agency with its own law governing information security and privacy breaches.

GLBA Financial Institution Customer Information

Section 501(b) of the GLBA and FIL-27-2005 Guidance on Response Programs for Unauthorized Access to Customer Information and Customer Notice require that a financial institution provide a notice to its customers whenever it becomes aware of an incident of unauthorized access to customer information and, at the conclusion of a reasonable investigation, determines that misuse of the information has occurred or it is reasonably possible that misuse will occur.

Customer notice should be given in a clear and conspicuous manner. The notice should include the following items:

  • Description of the incident

  • Type of information subject to unauthorized access

  • Measures taken by the institution to protect customers from further unauthorized access

  • Telephone number that customers can call for information and assistance

  • A reminder to customers to remain vigilant over the next 12 to 24 months and to report suspected identity theft incidents to the institution

The guidance encourages financial institutions to notify the nationwide consumer reporting agencies prior to sending notices to a large number of customers that include contact information for the reporting agencies.

Customer notices are required to be delivered in a manner designed to ensure that a customer can reasonably be expected to receive them. For example, the institution may choose to contact all customers affected by telephone, by mail, or by electronic mail (for those customers for whom it has a valid email address and who have agreed to receive communications electronically).

Financial institutions must notify their primary federal regulator as soon as possible when the institution becomes aware of an incident involving unauthorized access to or use of nonpublic customer information. Consistent with the agencies’ Suspicious Activity Report (SAR) regulations, institutions must file a timely SAR. In situations involving federal criminal violations requiring immediate attention, such as when a reportable violation is ongoing, institutions must promptly notify appropriate law enforcement authorities. Reference Chapter 12, “Business Continuity Management,” for further discussion of financial institution–related security incidents.

HIPAA/HITECH Personal Healthcare Information (PHI)

The HITECH Act requires that covered entities notify affected individuals when they discover that their unsecured PHI has been, or is reasonably believed to have been, breached—even if the breach occurs through or by a business associate. A breach is defined as “impermissible acquisition, access, or use or disclosure of unsecured PHI…unless the covered entity or business associate demonstrates that there is a low probability that the PHI has been compromised.”

The notification must be made without unreasonable delay and no later than 60 days after the discovery of the breach. The covered entity must also provide notice to “prominent media outlets” if the breach affects more than 500 individuals in a state or jurisdiction. The notice must include the following information:

  • A description of the breach, including the date of the breach and date of discovery

  • The type of PHI involved (such as full name, SSN, date of birth, home address, or account number)

  • Steps individuals should take to protect themselves from potential harm resulting from the breach

  • Steps the covered entity is taking to investigate the breach, mitigate losses, and protect against future breaches

  • Contact procedures for individuals to ask questions or receive additional information, including a toll-free telephone number, email address, website, or postal address

Covered entities must notify the Department of Health and Human Services (HHS) of all breaches. Notice to HHS must be provided immediately for breaches involving more than 500 individuals and annually for all other breaches. Covered entities have the burden of demonstrating that they satisfied the specific notice obligations following a breach, or, if notice is not made following an unauthorized use or disclosure, that the unauthorized use or disclosure did not constitute a breach. See Chapter 13 for further discussion of health-care–related security incidents.

Section 13407 of the HITECH Act directed the Federal Trade Commission (FTC) to issue breach notification rules pertaining to the exposure or compromise of personal health records (PHRs). A personal health record is defined by the FTC as an electronic record of “identifiable health information on an individual that can be drawn from multiple sources and that is managed, shared, and controlled by or primarily for the individual.” Don’t confuse PHR with PHI. PHI is information that is maintained by a covered entity as defined by HIPAA/HITECH. PHR is information provided by the consumer for the consumer’s own benefit. For example, if a consumer uploads and stores medical information from many sources in one online location, the aggregated data would be considered a PHR. The online service would be considered a PHR vendor.

The FTC rule applies to both vendors of PHRs (which provide online repositories that people can use to keep track of their health information) and entities that offer third-party applications for PHRs. The requirements regarding the scope, timing, and content mirror the requirements imposed on covered entities. The enforcement is the responsibility of the FTC. By law, noncompliance is considered “unfair and deceptive trade practices.”

Federal Agencies

Office of Management and Budget (OMB) Memorandum M-07-16: Safeguarding Against and Responding to the Breach of Personally Identifiable Information requires all federal agencies to implement a breach notification policy to safeguard paper and digital PII. Attachment 3, "External Breach Notification," identifies the factors agencies should consider in determining when notification outside the agency should be given and the nature of the notification. Notification may not be necessary for encrypted information. Each agency is directed to establish an agency response team. Agencies must assess the likely risk of harm caused by the breach and the level of risk. Agencies should provide notification without unreasonable delay following the detection of a breach, but are permitted to delay notification for law enforcement, national security purposes, or agency needs. Attachment 3 also includes specifics as to the content of the notice, criteria for determining the method of notification, and the types of notice that may be used.

Attachment 4, "Rules and Consequences Policy," states that supervisors may be subject to disciplinary action for failure to take appropriate action upon discovering the breach or failure to take the required steps to prevent a breach from occurring. Consequences may include reprimand, suspension, removal, or other actions in accordance with applicable law and agency policy.

Veterans Administration

On May 3, 2006, a data analyst at Veterans Affairs took home a laptop and an external hard drive containing unencrypted information on 26.5 million people. The computer equipment was stolen in a burglary of the analyst’s home in Montgomery County, Maryland. The burglary was immediately reported to both Maryland police and his supervisors at Veterans Affairs. The theft raised fears of potential mass identity theft. On June 29, the stolen laptop computer and hard drive were turned in by an unidentified person. The incident resulted in Congress imposing specific response, reporting, and breach notification requirements on the Veterans Administration (VA).

Title IX of P.L. 109-461, the Veterans Affairs Information Security Act, requires the VA to implement agency-wide information security procedures to protect the VA’s “sensitive personal information” (SPI) and VA information systems. P.L. 109-461 also requires that in the event of a “data breach” of SPI processed or maintained by the VA, the Secretary must ensure that as soon as possible after discovery, either a non-VA entity or the VA’s Inspector General conduct an independent risk analysis of the data breach to determine the level of risk associated with the data breach for the potential misuse of any SPI. Based on the risk analysis, if the Secretary determines that a reasonable risk exists of the potential misuse of SPI, the Secretary must provide credit protection services.

P.L. 109-461 also requires the VA to include data security requirements in all contracts with private-sector service providers that require access to SPI. All contracts involving access to SPI must include a prohibition of the disclosure of such information, unless the disclosure is lawful and expressly authorized under the contract, as well as the condition that the contractor or subcontractor notify the Secretary of any data breach of such information. In addition, each contract must provide for liquidated damages to be paid by the contractor to the Secretary in the event of a data breach with respect to any SPI, and that money should be made available exclusively for the purpose of providing credit protection services.

State Breach Notification Laws

All 50 states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands have enacted legislation requiring private or governmental entities to notify individuals of security breaches of information involving personally identifiable information.

  • California was the first to adopt a security breach notification law. The California Security Breach Information Act (California Civil Code Section 1798.82), effective July 1, 2003, required companies based in California or with customers in California to notify them whenever their personal information may have been compromised. This groundbreaking legislation provided the model for states around the country.

  • MA Chapter 93H Massachusetts Security Breach Notification Law, enacted in 2007, and the subsequent 201 CMR 17 Standards for the Protection of Personal Information of Residents of the Commonwealth, is widely regarded as the most comprehensive state information security legislation.

  • The Texas Breach Notification Law was amended in 2011 to require entities doing business within the state to provide notification of data breaches to residents of states that have not enacted their own breach notification law. In 2013, this provision was removed. Additionally, in 2013, an amendment was added that notice provided to consumers in states that require notification can comply with either the Texas law or the law of the state in which the individual resides.

The basic premise of the state security breach laws is that consumers have a right to know if unencrypted personal information such as SSN, driver’s license number, state identification card number, credit or debit card number, account password, PINs, or access codes have either been or are suspected to be compromised. The concern is that the listed information could be used fraudulently to assume or attempt to assume a person’s identity. Exempt from legislation is publicly available information that is lawfully made available to the general public from federal, state, or local government records or by widely distributed media.

State security breach notification laws generally follow the same framework, which includes who must comply, a definition of personal information and breach, the elements of harm that must occur, triggers for notification, exceptions, and the relationship to federal law and penalties and enforcement authorities. Although the framework is standard, the laws are anything but. The divergence begins with the differences in how personal information is defined and who is covered by the law, and ends in aggregate penalties that range from $50,000 to $500,000. The variations are so numerous that compliance is confusing and onerous.

It is strongly recommended that any organization that experiences a breach or suspected breach of PII consult with legal counsel for interpretation and application of the myriad of sector-based, federal, and state incident response and notification laws.

FYI: State Security Breach Notification Laws


The National Conference of State Legislatures maintains a public access library of state security breach notification laws and related legislation at http://www.ncsl.org/research/telecommunications-and-information-technology/security-breach-notification-laws.aspx.

Does Notification Work?

In the previous section, we discussed sector-based, federal, and state breach notification requirements. Notification can be resource-intensive, time-consuming, and expensive. The question that needs to be asked is, is it worth it? The resounding answer from privacy and security advocates, public relations (PR) specialists, and consumers is “yes.” Consumers trust those who collect their personal information to protect it. When that doesn’t happen, they need to know so that they can take steps to protect themselves from identity theft, fraud, and privacy violations.

Experian commissioned the Ponemon Institute to conduct a consumer study on data breach notification. The findings are instructive. When asked "What personal data if lost or stolen would you worry most about?", respondents overwhelmingly answered "password/PIN" and "Social Security number."

  • Eighty-five percent believe notification about data breach and the loss or theft of their personal information is relevant to them.

  • Fifty-nine percent believe a data breach notification means there is a high probability they will become an identity theft victim.

  • Fifty-eight percent say the organization has an obligation to provide identity protection services, and 55% say they should provide credit-monitoring services.

  • Seventy-two percent were disappointed in the way the notification was handled. A key reason for the disappointment is respondents’ belief that the notification did not increase their understanding about the data breach.

FYI: Security Breach Notification Website

New Hampshire law requires organizations to notify the Office of the Attorney General of any breach that impacts New Hampshire residents. Copies of all notifications are posted on the NH Department of Justice, Office of the Attorney General website at https://www.doj.nh.gov/consumer/security-breaches.

In Practice

Data Breach Reporting and Notification Policy

Synopsis: To ensure compliance with all applicable laws, regulations, and contractual obligations; timely communications with customers; and internal support for the process.

Policy Statement:

  • It is the intent of the company to comply with all information security breach–related laws, regulations, and contractual obligations.

  • Executive management has the authority to engage outside expertise for legal counsel, crisis management, PR, and communications.

  • Affected customers and business partners will be notified as quickly as possible of a suspected or known compromise of personal information. The company will provide regular updates as more information becomes known.

  • Based on applicable laws, legal counsel in collaboration with the CEO will make the determination regarding the scope and content of customer notification.

  • Legal counsel and the marketing/PR department will collaborate on all internal and external notifications and communications. All publications must be authorized by executive management.

  • Customer service must be staffed appropriately to meet the anticipated demand for additional information.

  • The COO is the official spokesperson for the organization. In his/her absence, the legal counsel will function as the official spokesperson.

The Public Face of a Breach

It’s tempting to keep a data breach secret, but doing so is not reasonable. Consumers need to know when their information is at risk so they can respond accordingly. After notification has gone out, rest assured that the media will pick up the story. Breaches attract more attention than other technology-related topics, so reporters are more apt to cover them to drive traffic to their sites. If news organizations learn about an attack through third-party sources while the breached organization remains silent, the fallout can be significant. Organizations must be proactive in their PR approach, using public messaging to counteract inaccuracies and tell the story from their point of view. Doing this right can save an organization’s reputation and even, in some cases, enhance the perception of its brand in the eyes of customers and the general public. PR professionals advise following these straightforward but strict rules when addressing the media and the public:

  • Get it over with.

  • Be humble.

  • Don’t lie.

  • Say only what needs to be said.

Don’t wait until a breach happens to develop a PR preparedness plan. Communications should be part of any incident preparedness strategy. Security specialists should work with PR people to identify the worst possible breach scenario so they can message against it and determine audience targets, including customers, partners, employees, and the media. Following a breach, messaging should be bulletproof and consistent.

Training users to choose strong passwords, avoid clicking embedded email links, refrain from opening unsolicited email attachments, properly identify anyone requesting information, and report suspicious activity can significantly reduce a small business’s exposure and harm.

Summary

An information security incident is an adverse event that threatens business security and/or disrupts operations. Examples include intentional unauthorized access, DDoS attacks, malware, and inappropriate usage. The objective of an information security risk management program is to minimize the number of successful attempts and attacks. The reality is that security incidents happen even at the most security-conscious organizations. Every organization should be prepared to respond to an incident quickly, confidently, and in compliance with applicable laws and regulations.

The objective of incident management is a consistent and effective approach to the identification of and response to information security–related incidents. Meeting that objective requires situational awareness, incident reporting mechanisms, a documented IRP, and an understanding of legal obligations. Incident preparation includes developing strategies and instructions for documentation and evidence handling, detection and investigation (including forensic analysis), containment, eradication and recovery, notification, and closure. The roles and responsibilities of key personnel, including executive management, legal counsel, incident response coordinators (IRCs), designated incident handlers (DIHs), the incident response team (IRT), and ancillary personnel as well as external entities such as law enforcement and regulatory agencies, should be clearly defined and communicated. Incident response capabilities should be practiced and evaluated on an ongoing basis.

Consumers have a right to know if their personal data has been compromised. In most situations, data breaches of PII must be reported to the appropriate authority and affected parties notified. A data breach is generally defined as actual or suspected compromise, unauthorized disclosure, unauthorized acquisition, unauthorized access, or unauthorized use or loss of control of legally protected PII. All 50 states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have enacted legislation requiring private or governmental entities to notify individuals of security breaches of information involving personally identifiable information. In addition to state laws, there are sector- and agency-specific federal regulations that pertain to reporting and notification. Organizations that experience a breach or suspected breach of PII should consult with legal counsel for interpretation and application of often overlapping and contradictory rules and expectations.

Incident management policies include the Incident Definition Policy, Incident Classification Policy, Incident Response Program Policy, Incident Response Authority Policy, Evidence Handling and Use Policy, and Data Breach Reporting and Notification Policy. NIST Special Publication 800-61 covers the major phases of the incident response process in detail. You should become familiar with that publication because it provides additional information that will help you succeed in your security operations center (SOC).
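The SP 800-61 lifecycle is not strictly linear: containment, eradication, and recovery can loop back to detection and analysis when new indicators are found, and post-incident lessons learned feed back into preparation. The phase names below come from SP 800-61 Rev. 2; the code itself is only an illustrative sketch of those transitions, not part of the standard:

```python
# Illustrative model of the NIST SP 800-61 Rev. 2 incident response lifecycle.
# Phase names come from the publication; the transition map mirrors its process
# diagram (response can loop back to analysis; lessons learned feed preparation).

TRANSITIONS = {
    "Preparation": {"Detection & Analysis"},
    "Detection & Analysis": {"Containment, Eradication & Recovery"},
    "Containment, Eradication & Recovery": {
        "Detection & Analysis",        # new indicators discovered mid-response
        "Post-Incident Activity",
    },
    "Post-Incident Activity": {"Preparation"},  # lessons learned
}

def can_advance(current: str, proposed: str) -> bool:
    """Return True if moving from `current` to `proposed` follows the lifecycle."""
    return proposed in TRANSITIONS.get(current, set())

if __name__ == "__main__":
    print(can_advance("Preparation", "Detection & Analysis"))    # True
    print(can_advance("Post-Incident Activity", "Preparation"))  # True
    print(can_advance("Preparation", "Post-Incident Activity"))  # False
```

Modeling the allowed transitions explicitly is a useful exercise when documenting an IRP, because it forces the team to decide in advance when a response may loop back to the analysis phase.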

Many organizations take advantage of tabletop (simulated) exercises to further test their capabilities. Tabletop exercises provide an opportunity to practice the response process and to perform gap analysis; they can also help an organization develop incident response playbooks. Developing a playbook framework makes future analysis modular and extensible.
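One way to make a playbook framework modular and extensible is to represent each response action as a small, reusable step that playbooks for different incident types can share. The sketch below is hypothetical (the step names and functions are inventions for illustration, not from any particular framework):

```python
# Hypothetical, minimal playbook framework: a playbook is an ordered list of
# named steps, and playbooks for new incident types reuse existing steps.
from typing import Callable, List, Tuple

Step = Tuple[str, Callable[[dict], str]]  # (step name, action on incident context)

def run_playbook(steps: List[Step], incident: dict) -> List[str]:
    """Execute each step in order, collecting one log entry per step."""
    log = []
    for name, action in steps:
        log.append(f"{name}: {action(incident)}")
    return log

# Reusable steps (stubs for illustration; real actions would call tooling).
def isolate_host(incident: dict) -> str:
    return f"isolated {incident['host']} from the network"

def collect_evidence(incident: dict) -> str:
    return f"imaged {incident['host']} for forensic analysis"

def notify_irc(incident: dict) -> str:
    return "incident response coordinator notified"

# A malware playbook composed from the shared steps.
malware_playbook: List[Step] = [
    ("Containment", isolate_host),
    ("Evidence", collect_evidence),
    ("Notification", notify_irc),
]

if __name__ == "__main__":
    for entry in run_playbook(malware_playbook, {"host": "ws-042"}):
        print(entry)
```

Because each step is self-contained, a tabletop exercise that reveals a gap (say, a missing notification step) can be addressed by adding one function and reusing it across every affected playbook.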

During the investigation and resolution of a security incident, you may also need to communicate with outside parties regarding the incident. Examples include, but are not limited to, contacting law enforcement, fielding media inquiries, seeking external expertise, and working with Internet service providers (ISPs), the vendors of your hardware and software products, threat intelligence vendors, coordination centers, and members of other incident response teams. You can also share relevant incident indicator of compromise (IoC) information and other observables with industry peers. A good example of an information-sharing community is the Financial Services Information Sharing and Analysis Center (FS-ISAC).

Test Your Skills

Multiple Choice Questions

1. Which of the following statements best defines incident management?

A. Incident management is risk minimization.

B. Incident management is a consistent approach to responding to and resolving issues.

C. Incident management is problem resolution.

D. Incident management is forensic containment.

2. Which of the following statements is true of security-related incidents?

A. Over time, security-related incidents have become less prevalent and less damaging.

B. Over time, security-related incidents have become more prevalent and more disruptive.

C. Over time, security-related incidents have become less prevalent and more damaging.

D. Over time, security-related incidents have become more numerous and less disruptive.

3. Which of the following CVSS score groups represents the intrinsic characteristics of a vulnerability that are constant over time and do not depend on a user-specific environment?

A. Temporal

B. Base

C. Environmental

D. Access vector

4. Which of the following aim to protect their citizens by providing security vulnerability information, security awareness training, best practices, and other information?

A. National CERTs

B. PSIRT

C. ATA

D. Global CERTs

5. Which of the following is the team that handles the investigation, resolution, and disclosure of security vulnerabilities in vendor products and services?

A. CSIRT

B. ICASI

C. USIRP

D. PSIRT

6. Which of the following is an example of a coordination center?

A. PSIRT

B. FIRST

C. The CERT/CC division of the Software Engineering Institute (SEI)

D. USIRP from ICASI

7. Which of the following is the most widely adopted standard to calculate the severity of a given security vulnerability?

A. VSS

B. CVSS

C. VCSS

D. CVSC

8. The CVSS base score defines Exploitability metrics that measure how a vulnerability can be exploited as well as Impact metrics that measure the impact on which of the following? (Choose three.)

A. Repudiation

B. Nonrepudiation

C. Confidentiality

D. Integrity

E. Availability

9. Which of the following is true about cybersecurity incidents?

A. Compromise business security

B. Disrupt operations

C. Impact customer trust

D. All of the above

10. Which of the following statements is true when a cybersecurity-related incident occurs at a business partner or vendor that hosts or processes legally protected data on behalf of an organization?

A. The organization does not need to do anything.

B. The organization must be notified and respond accordingly.

C. The organization is not responsible.

D. The organization must report the incident to local law enforcement.

11. Which of the following can be beneficial to further test incident response capabilities?

A. Phishing

B. Legal exercises

C. Tabletop exercises

D. Capture the flag

12. A celebrity is admitted to the hospital. If an employee accesses the celebrity’s patient record just out of curiosity, the action is referred to as __________.

A. inappropriate usage

B. unauthorized access

C. unacceptable behavior

D. undue care

13. Employees who report cybersecurity incidents should be __________.

A. prepared to respond to the incident

B. praised for their actions

C. provided compensation

D. none of the above

14. Which of the following statements is true of an incident response plan?

A. An incident response plan should be updated and authorized annually.

B. An incident response plan should be documented.

C. An incident response plan should be stress-tested.

D. All of the above.

15. Which of the following terms best describes a signal or warning that an incident may occur in the future?

A. A sign

B. A precursor

C. An indicator

D. Forensic evidence

16. Which of the following terms best describes the process of taking steps to prevent the incident from spreading?

A. Detection

B. Containment

C. Eradication

D. Recovery

17. Which of the following terms best describes the addressing of the vulnerabilities related to the exploit or compromise and restoring normal operations?

A. Detection

B. Containment

C. Testing

D. Recovery

18. Which of the following terms best describes the eliminating of the components of the incident?

A. Investigation

B. Containment

C. Eradication

D. Recovery

19. Which of the following terms best describes the substantive or corroborating evidence that an incident may have occurred or may be occurring now?

A. Indicator of compromise

B. Forensic proof

C. Hearsay

D. Diligence

20. Which of the following is not generally an incident response team responsibility?

A. Incident impact analysis

B. Incident communications

C. Incident plan auditing

D. Incident management

21. Documentation of the transfer of evidence is known as a ____________.

A. chain of evidence

B. chain of custody

C. chain of command

D. chain of investigation

22. Data breach notification laws pertain to which of the following?

A. Intellectual property

B. Patents

C. PII

D. Products

23. HIPAA/HITECH requires ______________ within 60 days of the discovery of a breach.

A. notification be sent to affected parties

B. notification be sent to law enforcement

C. notification be sent to Department of Health and Human Services

D. notification be sent to all employees

Exercises

Exercise 11.1: Assessing an Incident Report
  1. At your school or workplace, locate information security incident reporting guidelines.

  2. Evaluate the process. Is it easy to report an incident? Are you encouraged to do so?

  3. How would you improve the process?

Exercise 11.2: Evaluating an Incident Response Policy
  1. Locate an incident response policy document either at your school, workplace, or online. Does the policy clearly define the criteria for an incident?

  2. Does the policy define roles and responsibilities? If so, describe the response structure (for example, who is in charge, who should investigate an incident, who can talk to the media). If not, what information is the policy missing?

  3. Does the policy include notification requirements? If yes, what laws are referenced and why? If no, what laws should be referenced?

Exercise 11.3: Researching Containment and Eradication
  1. Research and identify the latest strains of malware.

  2. Choose one. Find instructions for containment and eradication.

  3. Conventional risk management wisdom is that it is better to replace a hard drive than to try to remove malware. Do you agree? Why or why not?

Exercise 11.4: Researching a DDoS Attack
  1. Find a recent news article about DDoS attacks.

  2. Who were the attackers and what was their motivation?

  3. What was the impact of the attack? What should the victim organization do to mitigate future damage?

Exercise 11.5: Understanding Evidence Handling
  1. Create a worksheet that could be used by an investigator to build an incident profile.

  2. Create an evidentiary chain of custody form that could be used in legal proceedings.

  3. Create a log for documenting forensic or computer-based investigation.

Projects

Project 11.1: Creating Incident Awareness
  1. One of the key messages to be delivered in training and awareness programs is the importance of incident reporting. Educating users to recognize and report suspicious behavior is a powerful deterrent to would-be intruders. The organization you work for has classified the following events as high priority, requiring immediate reporting:

    • Customer data at risk of exposure or compromise

    • Unauthorized use of a system for any purpose

    • DoS attack

    • Unauthorized downloads of software, music, or videos

    • Missing equipment

    • Suspicious person in the facility

    You have been tasked with training all users to recognize these types of incidents.

    1. Write a brief explanation of why each of the listed events is considered high priority. Include at least one example per event.

    2. Create a presentation that can be used to train employees to recognize these incidents and how to report them.

    3. Create a 10-question quiz that tests their post-presentation knowledge.

Project 11.2: Assessing Security Breach Notifications

Access the State of New Hampshire, Department of Justice, Office of the Attorney General security breach notification web page. Sort the notifications by year.

  1. Read three recent notification letters to the Attorney General as well as the corresponding notice that will be sent to the consumer (be sure to scroll through the document). Write a summary and timeline (as presented) of each event.

  2. Choose one incident to research. Find corresponding news articles, press releases, and so on.

  3. Compare the customer notification summary and timeline to your research. In your opinion, was the notification adequate? Did it include all pertinent details? What controls should the company put in place to prevent this from happening again?

Project 11.3: Comparing and Contrasting Regulatory Requirements

The objective of this project is to compare and contrast breach notification requirements.

  1. Create a grid that includes state, statute, definition of personal information, definition of a breach, time frame to report a breach, reporting agency, notification requirements, exemptions, and penalties for nonconformance. Fill in the grid using information from five states.

  2. If a company that did business in all five states experienced a data breach, would it be able to use the same notification letter for consumers in all five states? Why or why not?

  3. Create a single notification law using what you believe are the best elements of the five laws included in the grid. Be prepared to defend your choices.

Case Study

An Exercise in Cybercrime Incident Response

A cybercrime incident response exercise is one of the most effective ways to enhance organizational awareness. This cybercrime incident response exercise is designed to mimic a multiday event. Participants are challenged to find the clues, figure out what to do, and work as a team to minimize the impact. Keep the following points in mind:

  • Although fictional, the scenarios used in the exercise are based on actual events.

  • As in the actual events, there may be “unknowns,” and it may be necessary to make some assumptions.

  • The scenario will be presented in a series of situation vignettes.

  • At the end of each day, you will be asked to answer a set of questions. Complete the questions before continuing on.

  • At the end of Day 2, you will be asked to create a report.

This Case Study is designed to be a team project. You will need to work with at least one other member of your class to complete the exercise.

Background

BestBank is proudly celebrating its tenth anniversary year with special events throughout the year. Last year, BestBank embarked on a five-year strategic plan to extend its reach and offer services to municipalities and armed services personnel. Integral to this plan is the acquisition of U.S. Military Bank. The combined entity will be known as USBEST. The new entity will be primarily staffed by BestBank personnel.

USBEST is maintaining U.S. Military Bank’s long-term contract with the Department of Defense to provide financial and insurance services to active-duty and retired military personnel. The primary delivery channel is via a branded website. Active-duty and retired military personnel can access the site directly by going to www.bankformilitary.org. USBEST has also put a link on its home page. The bankformilitary.org website is hosted by HostSecure, a private company located in the Midwest.

USBEST’s first marketing campaign is a “We’re Grateful” promotion, including special military-only certificate of deposit (CD) rates as well as discounted insurance programs.

Cast of Characters:

  • Sam Smith, VP of Marketing

  • Robyn White, Deposit Operations and Online Banking Manager

  • Sue Jones, IT Manager

  • Cindy Hall, Deposit Operations Clerk

  • Joe Bench, COO

Day 1

Wednesday 7:00 A.M.

The marketing campaign begins with posts on Facebook and Twitter as well as emails to all current members of both institutions announcing the acquisition and the “We’re Grateful” promotion. All communications encourage active-duty and retired military personnel to visit the www.bankformilitary.org website.

Wednesday 10:00 A.M.

IT sends an email to Sam Smith, VP of Marketing, reporting that they have been receiving alerts that indicate there is significant web traffic to http://www.bankformilitary.org. Smith is pleased.

Wednesday Late Morning/Early Afternoon

By late morning, the USBEST receptionist starts getting calls about problems accessing the bankformilitary.org site. After lunch, the calls escalate; the callers are angry about something on the website. As per procedure, she informs callers that the appropriate person will call them back as soon as possible and forwards the messages to Sam Smith’s voicemail.

Wednesday 3:45 P.M.

Sam Smith returns to his office and retrieves his voice messages. Smith opens his browser and goes to bankformilitary.org. To his horror, he finds that “We’re Grateful” has been changed to “We’re Hateful” and that “USBEST will be charging military families fees for all services.”

Sam immediately goes to the office of Robyn White, Deposit Operations and Online Banking Manager. Robyn’s department is responsible for online services, including the bankformilitary.org website, and she has administrative access. He is told that Robyn is working remotely and that she has email access. Sam calls Robyn at home but gets her voicemail. He sends her an email asking her to call him ASAP!

Sam then contacts the bank’s IT Manager, Sue Jones. Sue calls HostSecure for help in gaining access to the website. HostSecure is of little assistance. They claim that all they do is host, not manage the site. Sue insists upon talking to “someone in charge.” After being transferred and put on hold numerous times, she speaks with the HostSecure Security Officer, who informs her that upon proper authorization, they can shut down the website. Jones inquires who is on the authorization list. The HostSecure Security Officer informs Sue that it would be a breach of security to provide that information.

Wednesday 4:40 P.M.

Sue Jones locates Robyn White’s cell phone number and calls her to discuss what is happening. Robyn apologizes for not responding sooner to Sam’s email; she had ducked out for her son’s soccer game. Robyn tells Sue that she had received an email early this morning from HostSecure informing her that she needed to update her administrative password to a more secure version. The email had a link to a change password form. She was happy to learn that they were updating their password requirements. Robyn reported that she clicked the link, followed the instructions (which included verifying her current password), and changed her password to a secure, familiar one. She also forwarded the email to Cindy Hall, Deposit Operations Clerk, and asked her to update her password as well. Sue asks Robyn to log in to bankformilitary.org to edit the home page. Robyn complies and logs in with her new credentials. The login screen returns a “bad password” error. She logs in with her old credentials; they do not work either.

Wednesday 4:55 P.M.

Sue Jones calls HostSecure. She is put on hold. After waiting five minutes, she hangs up and calls again. This time she receives a message that regular business hours are 8:00 a.m. to 5:00 p.m. EST. The message indicates that for emergency service, customers should call the number on their service contract. Sue does not have a copy of the contract. She calls the Accounting and Finance Department Manager to see if they have a copy. Everyone in the department is gone for the day.

Wednesday 5:10 P.M.

Sue Jones lets Sam Smith know that she cannot do anything more until the morning. Sam decides to update Facebook and the USBEST website home page with an announcement of what is happening, reassuring the public that the bank is doing everything they can and apologizing profusely.

Day 1 Questions:

  1. What do you suspect is happening or has happened?

  2. What actions (if any) should be taken?

  3. Who should be contacted and what should they be told?

  4. What lessons can be learned from the day’s events?

Day 2

Thursday 7:30 A.M.

Cindy Hall’s first task of the day is to log in to the bankformilitary.org administrative portal to retrieve a report on the previous evening’s transactional activity. She is surprised to see so many Bill Pay transactions. Upon closer inspection, the funds all seem to be going to the same account and they started a few minutes after midnight. She makes an assumption that the retailer must be having a midnight special and wonders what it is. She then opens Outlook and sees Robyn’s forwarded email about changing her password. She proceeds to do so.

Thursday 8:00 A.M.

Customer Service opens at 8:00 a.m. Immediately they begin fielding calls from military personnel reporting fraudulent Bill Pay transactions. The CSR manager calls Cindy Hall in Deposit Operations to report the problem. Cindy accesses the bankformilitary.org administrative portal to get more information but finds she cannot log in. She figures she must have written down her new password incorrectly. She will ask Robyn to reset it when she gets in.

Thursday 8:30 A.M.

Robyn arrives for work and plugs in her laptop.

Thursday 9:00 A.M.

Sam Smith finally gets through to someone at HostSecure who agrees to work with him to remove the offending text. He is very relieved and informs Joe Bench, COO.

Thursday 10:10 A.M.

Sam Smith arrives 10 minutes late to the weekly senior management meeting. He is visibly shaken. He connects his iPad to a video projector and displays an anonymous blog post that describes the defacement and lists the URL for the bankformilitary.org administrative portal, the username and password for an administrative account, and member account information. He scrolls down to the next blog entry, which includes private internal bank correspondence.

Day 2 Questions:

  1. What do you suspect is happening or has happened?

  2. Who (if anyone) external to the organization should be notified?

  3. What actions should be taken to contain the incident and minimize the impact?

  4. What should be done post-containment?

  5. What lessons can be learned from the day’s events?

Day 2 Report

It is now 11:00 a.m. An emergency meeting of the Board of Directors has been called for 3:30 p.m. You are tasked with preparing a written report for the Board that includes a synopsis of the incident, details the response effort up to the time of the meeting, and recommends a timeline of next steps.

Day 2: Presentation to the Board of Directors 3:30 P.M.

  1. Present your written report to the Board of Directors.

  2. Be prepared to discuss next steps.

  3. Be prepared to discuss law enforcement involvement (if applicable).

  4. Be prepared to discuss consumer notification obligations (if applicable).

References

Regulations Cited

“Data Breach Response: A Guide for Business,” Federal Trade Commission, accessed 04/2018, https://www.ftc.gov/tips-advice/business-center/guidance/data-breach-response-guide-business.

“Appendix B to Part 364—Interagency Guidelines Establishing Information Security Standards,” accessed 04/2018, https://www.fdic.gov/regulations/laws/rules/2000-8660.html.

“201 CMR 17.00: Standards for the Protection of Personal Information of Residents of the Commonwealth,” official website of the Office of Consumer Affairs & Business Regulation (OCABR), accessed 04/2018, www.mass.gov/ocabr/docs/idtheft/201cmr1700reg.pdf.

“Family Educational Rights and Privacy Act (FERPA),” official website of the U.S. Department of Education, accessed 04/2018, https://www2.ed.gov/policy/gen/guid/fpco/ferpa/index.html.

“Financial Institution Letter (FIL-27-2005), Final Guidance on Response Programs for Unauthorized Access to Customer Information and Customer Notice,” accessed 04/2018, https://www.fdic.gov/news/news/financial/2005/fil2705.html.

“HIPAA Security Rule,” official website of the Department of Health and Human Services, accessed 04/2018, https://www.hhs.gov/hipaa/for-professionals/security/index.html.

“Office of Management and Budget Memorandum M-07-16 Safeguarding Against and Responding to the Breach of Personally Identifiable Information,” accessed 04/2018, https://www.whitehouse.gov/OMB/memoranda/fy2007/m07-16.pdf.

Other References

“The Vocabulary for Event Recording and Incident Sharing (VERIS),” accessed 04/2018, http://veriscommunity.net/veris-overview.html.

“Chain of Custody and Evidentiary Issues,” eLaw Exchange, accessed 04/2018, www.elawexchange.com.

“Complying with the FTC’s Health Breach Notification Rule,” FTC Bureau of Consumer Protection, accessed 04/2018, https://www.ftc.gov/tips-advice/business-center/guidance/complying-ftcs-health-breach-notification-rule.

“Student Privacy,” U.S. Department of Education, accessed 04/2018, https://studentprivacy.ed.gov/.

“Forensic Examination of Digital Evidence: A Guide for Law Enforcement,” U.S. Department of Justice, National Institute of Justice, accessed 04/2018, https://www.nij.gov/publications/pages/publication-detail.aspx?ncjnumber=199408.

“VERIS Community Database,” accessed 04/2018, http://veriscommunity.net/vcdb.html.

Mandia, Kevin, and Chris Prosise, Incident Response: Investigating Computer Crime, Berkeley, California: Osborne/McGraw-Hill, 2001.

Nolan, Richard, “First Responders Guide to Computer Forensics,” 2005, Carnegie Mellon University Software Engineering Institute.

“United States Computer Emergency Readiness Team,” US-CERT, accessed 04/2018, https://www.us-cert.gov.

“CERT Division of the Software Engineering Institute (SEI),” accessed 04/2018, https://cert.org.

Forum of Incident Response and Security Teams (FIRST), accessed 04/2018, https://first.org.

“The Common Vulnerability Scoring System: Specification Document,” accessed 04/2018, https://first.org/cvss/specification-document.
