Chapter 3. Risk Mitigation Strategies and Controls

This chapter covers CAS-003 objective 1.3.

Security professionals must help the organizations they work for to put in place the proper risk mitigation strategies and controls. Security professionals should use a risk management framework to ensure that risks are properly identified and the appropriate controls are put into place. This chapter covers all the tasks involved in risk mitigation, including the following:

  • Categorize data types by impact levels based on CIA.

  • Incorporate stakeholder input into CIA impact-level decisions.

  • Determine the aggregate CIA score.

  • Determine the minimum required security controls based on aggregate score.

  • Select and implement controls based on CIA requirements and organizational policies.

  • Extreme scenario planning/worst-case scenario.

  • Conduct system-specific risk analysis.

  • Make risk determination based upon known metrics.

  • Translate technical risks into business terms.

  • Recommend which strategy should be applied based on risk appetite.

This chapter also covers the risk management processes, continuous improvement and monitoring, business continuity planning, and IT governance.

Categorize Data Types by Impact Levels Based on CIA

The three fundamentals of security are confidentiality, integrity, and availability (CIA). Most security issues result in a violation of at least one facet of the CIA triad. Understanding these three security principles will help security professionals ensure that the security controls and mechanisms implemented protect at least one of these principles.

To ensure confidentiality, you must prevent the disclosure of data or information to unauthorized entities. As part of confidentiality, the sensitivity level of data must be determined before any access controls are put in place. Data with a higher sensitivity level will have more access controls in place than data with a lower sensitivity level. The opposite of confidentiality is disclosure. Most security professionals consider confidentiality as it relates to data on a network or devices. However, data can also exist in printed format. Appropriate controls should be put into place to protect data on a network, but data in its printed format needs to be protected, too, which involves implementing data disposal policies. Examples of controls that improve confidentiality include encryption, steganography, access control lists (ACLs), and data classifications.

Integrity, the second part of the CIA triad, ensures that data is protected from unauthorized modification or data corruption. The goal of integrity is to preserve the consistency of data. The opposite of integrity is corruption. Many individuals do not consider data integrity to be as important as data confidentiality. However, data modification or corruption can often be just as detrimental to an enterprise because the original data is lost. Examples of controls that improve integrity include digital signatures, checksums, and hashes.

Finally, availability means ensuring that data is accessible when and where it is needed by the individuals authorized to access it. Availability is the opposite of destruction or isolation. While many consider this tenet to be the least important of the three, an availability failure, such as a denial-of-service (DoS) attack against a customer-facing web server, often affects end users and customers the most. Examples of controls that improve availability include load balancing, hot sites, and RAID.

Every security control that is put into place by an organization fulfills at least one of the security principles of the CIA triad. Understanding how to circumvent these security principles is just as important as understanding how to provide them.

A balanced security approach should be implemented to ensure that all three facets are considered when security controls are implemented. When implementing any control, you should identify the facet that the control addresses. For example, RAID addresses data availability, file hashes address data integrity, and encryption addresses data confidentiality. A balanced approach ensures that no facet of the CIA triad is ignored.

Federal Information Processing Standard Publication 199 (FIPS 199) defines standards for security categorization of federal information systems. This U.S. government standard establishes security categories of information systems used by the federal government.

FIPS 199 requires federal agencies to assess their information systems in each of the three categories (confidentiality, integrity, and availability), rating each system as low, moderate, or high impact in each category. An information system’s overall security category is the highest rating assigned in any category.

A potential impact is low if the loss of any tenet of CIA could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals. This occurs if the organization is able to perform its primary function but not as effectively as normal. This category involves only minor damage, financial loss, or harm.

A potential impact is moderate if the loss of any tenet of CIA could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals. This occurs if the effectiveness with which the organization is able to perform its primary function is significantly reduced. This category involves significant damage, financial loss, or harm.

A potential impact is high if the loss of any tenet of CIA could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals. This occurs if an organization is not able to perform one or more of its primary functions. This category involves major damage, financial loss, or severe harm.

FIPS 199 provides a helpful chart that ranks the levels of CIA for information assets, as shown in Table 3-1.

Table 3-1 Confidentiality, Integrity, and Availability Potential Impact Definitions

| CIA Tenet | Low | Moderate | High |
|---|---|---|---|
| Confidentiality | Unauthorized disclosure will have limited adverse effect on the organization. | Unauthorized disclosure will have serious adverse effect on the organization. | Unauthorized disclosure will have severe adverse effect on the organization. |
| Integrity | Unauthorized modification will have limited adverse effect on the organization. | Unauthorized modification will have serious adverse effect on the organization. | Unauthorized modification will have severe adverse effect on the organization. |
| Availability | Unavailability will have limited adverse effect on the organization. | Unavailability will have serious adverse effect on the organization. | Unavailability will have severe adverse effect on the organization. |

It is also important that security professionals and organizations understand the information classification and life cycle. Classification varies depending on whether the organization is a commercial business or a military/government entity.

Incorporate Stakeholder Input into CIA Impact-Level Decisions

Often security professionals alone cannot best determine the CIA levels for enterprise information assets. Security professionals should consult with the asset stakeholders to gain their input on which level should be assigned to each tenet for an information asset. Keep in mind, however, that all stakeholders should be consulted. For example, while department heads should be consulted and have the biggest influence on the CIA decisions about departmental assets, other stakeholders within the department and organization should be consulted as well.

This rule holds for any security project that an enterprise undertakes. Stakeholder input is critical at the start of a project to ensure that stakeholder needs are documented and to gain stakeholder buy-in. Later, if problems arise with the security project and changes must be made, the project team should discuss the potential changes with the project stakeholders before any changes are approved or implemented.

Any feedback should be recorded and combined with the security professional's own assessment to help determine the CIA levels.

Determine the Aggregate CIA Score

According to Table 3-1, FIPS 199 defines three impacts (low, moderate, and high) for the three security tenets. But the levels that are assigned to organizational entities must be defined by the organization because only the organization can determine whether a particular loss is limited, serious, or severe.

According to FIPS 199, the security category (SC) of an identified entity expresses the three tenets with their values for an organizational entity. The values are then used to determine which security controls should be implemented. If a particular asset is made up of multiple entities, then you must calculate the SC for that asset based on the entities that make it up. FIPS 199 provides a nomenclature for expressing these values, as shown here:

SC(information type) = {(confidentiality, impact), (integrity, impact), (availability, impact)}

Let’s look at this nomenclature applied to a real-world example:

SC(public site) = {(confidentiality, low), (integrity, moderate), (availability, high)}

SC(partner site) = {(confidentiality, moderate), (integrity, high), (availability, moderate)}

SC(internal site) = {(confidentiality, high), (integrity, moderate), (availability, moderate)}

Now let’s assume that all of the sites reside on the same web server. To determine the nomenclature for the web server, you need to use the highest values of each of the categories:

SC(web server) = {(confidentiality, high), (integrity, high), (availability, high)}

Some organizations may decide to place the public site on a web server and isolate the partner site and internal site on another web server. In this case, the public web server would not need all of the same security controls and would be cheaper to implement than the partner/internal web server.
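
The high-water-mark aggregation is straightforward to express programmatically. The following Python sketch (the dictionary keys, function name, and site values are illustrative, not part of FIPS 199) computes the aggregate SC for the web server example above:

```python
# Minimal sketch of the FIPS 199 "high-water mark" aggregation described above.
# Impact levels are ordered so the highest level per tenet can be selected.
LEVELS = ["low", "moderate", "high"]

def aggregate_sc(*security_categories):
    """Return the aggregate SC for a system hosting several information types."""
    aggregate = {}
    for tenet in ("confidentiality", "integrity", "availability"):
        aggregate[tenet] = max(
            (sc[tenet] for sc in security_categories), key=LEVELS.index
        )
    return aggregate

public_site = {"confidentiality": "low", "integrity": "moderate", "availability": "high"}
partner_site = {"confidentiality": "moderate", "integrity": "high", "availability": "moderate"}
internal_site = {"confidentiality": "high", "integrity": "moderate", "availability": "moderate"}

print(aggregate_sc(public_site, partner_site, internal_site))
# {'confidentiality': 'high', 'integrity': 'high', 'availability': 'high'}
```

Running the aggregation for the public site alone yields lower confidentiality and integrity requirements, which is the basis for the cost argument made above.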

For the CASP exam, this FIPS 199 nomenclature is referred to as the aggregate CIA score.

Determine Minimum Required Security Controls Based on Aggregate Score

The appropriate security controls must be implemented for all organizational assets. The security controls that should be implemented are determined based on the aggregate CIA score discussed earlier in this chapter.

It is vital that security professionals understand the types of coverage that are provided by the different security controls that can be implemented. As analysis occurs, security professionals should identify a minimum set of security controls that must be implemented.
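
In practice, the aggregate score is often mapped to a predefined control baseline (for example, the low/moderate/high baselines used with NIST SP 800-53, discussed later in this chapter). The sketch below is a minimal illustration of that idea; the baseline names follow the NIST convention, but the example controls listed are assumptions, not an official catalog:

```python
# Illustrative mapping from an aggregate CIA score to a minimum control baseline.
# The control lists are examples only, not drawn from an official catalog.
LEVELS = ["low", "moderate", "high"]

BASELINE_CONTROLS = {
    "low": ["access control policy", "account management", "audit logging"],
    "moderate": ["+ encryption in transit", "+ continuous monitoring"],
    "high": ["+ encryption at rest", "+ redundant/hot-site availability"],
}

def minimum_baseline(aggregate_sc):
    """The overall categorization is the highest impact level of any tenet."""
    return max(aggregate_sc.values(), key=LEVELS.index)

web_server = {"confidentiality": "high", "integrity": "high", "availability": "high"}
level = minimum_baseline(web_server)
print(level, "baseline:", BASELINE_CONTROLS[level])
```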

Select and Implement Controls Based on CIA Requirements and Organizational Policies

Security professionals must ensure that the appropriate controls are selected and implemented for organizational assets to be protected. The controls that are selected and implemented should be based on the CIA requirements and the policies implemented by the organization. After implementing controls, it may also be necessary to perform a gap analysis to determine where security gaps still exist so that other needed security controls can be implemented.

Security professionals should be familiar with the categories and types of access controls that can be implemented.

Access Control Categories

You implement access controls as a countermeasure to identified vulnerabilities. Access control mechanisms that you can use are divided into seven main categories:

  • Compensative

  • Corrective

  • Detective

  • Deterrent

  • Directive

  • Preventive

  • Recovery

Any access control you implement will fit into one or more access control categories.

Compensative

Compensative controls are in place to substitute for a primary access control and mainly help mitigate risks. By using compensative controls, you can reduce risk to a more manageable level. Examples of compensative controls include requiring two authorized signatures to release sensitive or confidential information and requiring two keys owned by different personnel to open a safe deposit box.

Corrective

Corrective controls are in place to reduce the effect of an attack or another undesirable event. You can use corrective controls to fix or restore the entity that is attacked. Examples of corrective controls include installing fire extinguishers, isolating or terminating a connection, implementing new firewall rules, and using server images to restore to a previous state. Corrective controls are useful after an event has occurred.

Detective

Detective controls are in place to detect an attack while it is occurring to alert appropriate personnel. Examples of detective controls include motion detectors, intrusion detection systems (IDSs), logs, guards, investigations, auditing, and job rotation. Detective controls are useful during an event.

Deterrent

Deterrent controls deter or discourage an attacker. Via deterrent controls, attacks can be discovered early in the process. Deterrent controls often trigger preventive and corrective controls. Examples of deterrent controls include user identification and authentication, fences, lighting, and organizational security policies, such as non-disclosure agreements (NDAs).

Directive

Directive controls specify acceptable practice within an organization. They are in place to formalize an organization’s security directive, mainly to its employees. The most popular directive control is an acceptable use policy (AUP), which lists proper procedures and behaviors that personnel must follow (and often examples of improper procedures). Any organizational security policies or procedures usually fall into this access control category. Keep in mind that directive controls are effective only if there is a stated consequence for not following the organization’s directions.

Preventive

Preventive controls prevent an attack from occurring. Examples of preventive controls include locks, badges, biometric systems, encryption, intrusion prevention systems (IPSs), antivirus software, personnel security, security guards, passwords, and security awareness training. Preventive controls are useful before an event occurs.

Recovery

Recovery controls recover a system or device after an attack has occurred. The primary goal of recovery controls is restoring resources. Examples of recovery controls include disaster recovery plans, data backups, and offsite facilities.

Access Control Types

Access control types are divided based on their method of implementation. There are three types of access controls:

  • Administrative (management) controls

  • Logical (technical) controls

  • Physical controls

In any organization where defense in depth is a priority, access control requires the use of all three types of access controls. Even if you implement the strictest physical and administrative controls, you cannot fully protect the environment without logical controls.

Administrative (Management) Controls

Administrative, or management, controls are implemented to administer the organization’s assets and personnel and include security policies, procedures, standards, baselines, and guidelines that are established by management. These controls are commonly referred to as soft controls. Specific examples are personnel controls, data classification, data labeling, security awareness training, and supervision.

Security awareness training is a very important administrative control. Its purpose is to improve the organization’s attitude about safeguarding data. The benefits of security awareness training include reduction in the number and severity of errors and omissions, better understanding of information value, and better administrator recognition of unauthorized intrusion attempts. A cost-effective way to ensure that employees take security awareness seriously is to create an award or recognition program.

Table 3-2 lists many administrative controls and shows the access control categories into which the controls fit.

Table 3-2 Administrative (Management) Controls

| Administrative Controls | Compensative | Corrective | Detective | Deterrent | Directive | Preventive | Recovery |
|---|---|---|---|---|---|---|---|
| Personnel procedures | | | | | | × | |
| Security policies | | | | × | × | × | |
| Monitoring | | | × | | | | |
| Separation of duties | | | | | | × | |
| Job rotation | × | | × | | | | |
| Information classification | | | | | | × | |
| Security awareness training | | | | | | × | |
| Investigations | | | × | | | | |
| Disaster recovery plan | | | | | | × | × |
| Security reviews | | | × | | | | |
| Background checks | | | × | | | | |
| Termination | | × | | | | | |
| Supervision | × | | | | | | |

Logical (Technical) Controls

Logical, or technical, controls are software or hardware components used to restrict access. Specific examples of logical controls are firewalls, IDSs, IPSs, encryption, authentication systems, protocols, auditing and monitoring, biometrics, smart cards, and passwords.

For example, an organization might adopt a security policy that forbids employees from remotely configuring the email server from a third party’s location during work hours; the policy itself is an administrative control, while the technical control is the mechanism that enforces it, such as an access rule that blocks such remote configuration connections.

Although auditing and monitoring are logical controls and are often listed together, they are actually two different controls. Auditing is a one-time or periodic event to evaluate security. Monitoring is an ongoing activity that examines either the system or users.

Table 3-3 lists many logical controls and shows the access control categories into which the controls fit.

Table 3-3 Logical (Technical) Controls

| Logical (Technical) Controls | Compensative | Corrective | Detective | Deterrent | Directive | Preventive | Recovery |
|---|---|---|---|---|---|---|---|
| Passwords | | | | | | × | |
| Biometrics | | | | | | × | |
| Smart cards | | | | | | × | |
| Encryption | | | | | | × | |
| Protocols | | | | | | × | |
| Firewalls | | | | | | × | |
| IDSs | | | × | | | | |
| IPSs | | | | | | × | |
| Access control lists | | | | | | × | |
| Routers | | | | | | × | |
| Auditing | | | × | | | | |
| Monitoring | | | × | | | | |
| Data backups | | | | | | | × |
| Antivirus software | | | | | | × | |
| Configuration standards | | | | | × | | |
| Warning banners | | | | × | | | |
| Connection isolation and termination | | × | | | | | |

Physical Controls

Physical controls are implemented to protect an organization’s facilities and personnel. Personnel concerns should take priority over all other concerns. Specific examples of physical controls include perimeter security, badges, swipe cards, guards, dogs, mantraps, biometrics, and cabling.

Table 3-4 lists many physical controls and shows the access control categories into which the controls fit.

Table 3-4 Physical Controls

| Physical Controls | Compensative | Corrective | Detective | Deterrent | Directive | Preventive | Recovery |
|---|---|---|---|---|---|---|---|
| Fencing | | | | × | | × | |
| Locks | | | | | | × | |
| Guards | | | × | | | × | |
| Fire extinguishers | | × | | | | | |
| Badges | | | | | | × | |
| Swipe cards | | | | | | × | |
| Dogs | | | × | | | × | |
| Mantraps | | | | | | × | |
| Biometrics | | | | | | × | |
| Lighting | | | | × | | | |
| Motion detectors | | | × | | | | |
| CCTV | × | | × | | | | |
| Data backups | | | | | | | × |
| Antivirus software | | | | | | × | |
| Configuration standards | | | | | × | | |
| Warning banners | | | | × | | | |
| Hot, warm, and cold sites | | | | | | | × |

Security Requirements Traceability Matrix (SRTM)

A security requirements traceability matrix (SRTM) is a grid that displays what is required for an asset’s security. SRTMs are necessary in technical projects that call for security to be included. Using such a matrix is an effective way to ensure that every security requirement is tracked through testing and verification and that all work is completed.

Table 3-5 is an example of an SRTM for a new interface. Keep in mind that an organization may customize an SRTM to fit its needs.

Table 3-5 SRTM Example

| ID Number | Description | Source | Test Objectives | Verification Method |
|---|---|---|---|---|
| BMD-1 | Ensure that data in the TETRA database is secured through the interface | Functional design team | Test encryption method used | Determined by security analyst |
| BMD-2 | Accept requests only from known staff, applications, and IP addresses | Functional design team | Test from unknown users, applications, and IP addresses | Determined by security analyst |
| BMD-3 | Encrypt all data between the TETRA database and corporate database | Functional design team | Test encryption method used | Determined by security analyst and database administrator |
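
Because an SRTM is simply a structured grid, it is often convenient to keep it as data that can be queried, exported, or diffed as requirements change. The following Python sketch stores the first two rows of Table 3-5; the field names mirror the table columns, and the CSV export is just one possible output format:

```python
import csv
import sys

# Sketch of an SRTM kept as structured data, using the columns from Table 3-5.
FIELDS = ["id", "description", "source", "test_objectives", "verification_method"]

srtm = [
    {"id": "BMD-1",
     "description": "Ensure that data in the TETRA database is secured through the interface",
     "source": "Functional design team",
     "test_objectives": "Test encryption method used",
     "verification_method": "Determined by security analyst"},
    {"id": "BMD-2",
     "description": "Accept requests only from known staff, applications, and IP addresses",
     "source": "Functional design team",
     "test_objectives": "Test from unknown users, applications, and IP addresses",
     "verification_method": "Determined by security analyst"},
]

# Export the matrix as CSV so it can be tracked alongside project documentation.
writer = csv.DictWriter(sys.stdout, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(srtm)
```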

Security Control Frameworks

Many organizations have developed security management frameworks and methodologies to help guide security professionals. These frameworks and methodologies include security program development standards, enterprise and security architecture development frameworks, security controls development methods, corporate governance methods, and process management methods. Frameworks, standards, and methodologies are often discussed together because they are related. Standards are specific and accepted as best practices, whereas frameworks are more general structures that organize those practices. Methodologies are systems of practices, techniques, procedures, and rules used by those who work in a discipline. In this section we cover all three as they relate to security controls.

This section discusses the following frameworks and methodologies and explains where they are used:

  • ISO/IEC 27000 Series

  • Zachman Framework™

  • TOGAF

  • DoDAF

  • MODAF

  • SABSA

  • COBIT

  • NIST

  • HITRUST CSF

  • CIS Critical Security Controls

  • COSO

  • OCTAVE

  • ITIL

  • Six Sigma

  • CMMI

  • CRAMM

Note

Organizations should select the framework, standard, and/or methodology that represents the organization in the most useful manner, based on the needs of the stakeholders.

ISO/IEC 27000 Series

While technically not a framework, ISO 27000 is a security program development standard on how to develop and maintain an information security management system (ISMS).

The 27000 Series includes a list of standards, each of which addresses a particular aspect of ISMS. These standards are either published or in development. The following standards are included as part of the ISO/IEC 27000 Series at the time of this writing:

  • 27000:2016: Published overview of ISMS and vocabulary

  • 27001:2013: Published ISMS requirements

  • 27002:2013: Published code of practice for information security controls

  • 27003:2017: Published guidance on the requirements for an ISMS

  • 27004:2016: Published ISMS monitoring, measurement, analysis, and evaluation guidelines

  • 27005:2011: Published information security risk management guidelines

  • 27006:2015: Published requirements for bodies providing audit and certification of ISMS

  • 27007:2017: Published ISMS auditing guidelines

  • 27008:2011: Published auditor of ISMS guidelines

  • 27009:2016: Published sector-specific application of ISO/IEC 27001 guidelines

  • 27010:2015: Published information security management for inter-sector and inter-organizational communications guidelines

  • 27011:2016: Published telecommunications organization information security management guidelines

  • 27013:2015: Published integrated implementation of ISO/IEC 27001 and ISO/IEC 20000-1 guidance

  • 27014:2013: Published information security governance guidelines

  • 27016:2014: Published ISMS organizational economics guidelines

  • 27017:2015: Published cloud services information security control guidelines based on ISO/IEC 27002

  • 27018:2014: Published code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors

  • 27019:2017: Published information security controls for the energy utility industry guidelines

  • 27021:2017: Published competence requirements for information security management systems professionals

  • 27023:2015: Published mapping the revised editions of ISO/IEC 27001 and ISO/IEC 27002

  • 27031:2011: Published information and communication technology readiness for business continuity guidelines

  • 27032:2012: Published cybersecurity guidelines

  • 27033-1:2015: Published network security overview and concepts

  • 27033-2:2012: Published network security design and implementation guidelines

  • 27033-3:2010: Published network security threats, design techniques, and control issues guidelines

  • 27033-4:2014: Published securing communications between networks using security gateways

  • 27033-5:2013: Published securing communications across networks using virtual private networks (VPNs)

  • 27033-6:2016: In-development document on securing wireless IP network access

  • 27034-1:2011: Published application security overview and concepts

  • 27034-2:2015: Published application security organization normative framework guidelines

  • 27034-5:2017: Published application security protocols and controls data structure guidelines

  • 27034-6:2016: Published case studies for application security

  • 27035-1:2016: Published information security incident management principles

  • 27035-2:2016: Published information security incident response readiness guidelines

  • 27036-1:2014: Published information security for supplier relationships overview and concepts

  • 27036-2:2014: Published information security for supplier relationships common requirements guidelines

  • 27036-3:2013: Published information and communication technology (ICT) supply chain security guidelines

  • 27036-4:2016: Published guidelines for security of cloud services

  • 27037:2012: Published digital evidence identification, collection, acquisition, and preservation guidelines

  • 27038:2014: Published information security digital redaction specification

  • 27039:2015: Published IDS selection, deployment, and operations guidelines

  • 27040:2015: Published storage security guidelines

  • 27041:2015: Published guidance on assuring suitability and adequacy of incident investigative method

  • 27042:2015: Published digital evidence analysis and interpretation guidelines

  • 27043:2015: Published incident investigation principles and processes

  • 27050-1:2016: Published electronic discovery (eDiscovery) overview and concepts

  • 27050-3:2017: Published code of practice for electronic discovery

  • 27799:2016: Published information security in health organizations guidelines

These standards are developed by the ISO/IEC bodies, but certification or conformity assessment is provided by third parties.

Note

The number after the colon for each standard stands for the year that the standard was published. You can find more information regarding ISO standards at www.iso.org. All ISO standards are copyrighted and must be purchased to obtain the detailed information that appears in the standards.

Zachman Framework™

The Zachman Framework™, an enterprise architecture framework, is a two-dimensional classification system based on six communication questions (What? Where? When? Why? Who? and How?) that intersect with different perspectives (Executive, Business Management, Architect, Engineer, Technician, and Enterprise). This system allows analysis of an organization to be presented to different groups in the organization in ways that relate to the groups’ responsibilities. Although this framework is not security oriented, using it helps you relay information for personnel in the language and format that are most useful to them.

The Open Group Architecture Framework (TOGAF)

TOGAF, another enterprise architecture framework, helps organizations design, plan, implement, and govern an enterprise information architecture. TOGAF is based on four interrelated domains: technology, applications, data, and business.

Department of Defense Architecture Framework (DoDAF)

DoDAF is an architecture framework that organizes a set of products under eight viewpoints: all viewpoint (required) (AV), capability viewpoint (CV), data and information viewpoint (DIV), operational viewpoint (OV), project viewpoint (PV), services viewpoint (SvcV), standards viewpoint (StdV), and systems viewpoint (SV). It is used to ensure that new DoD technologies integrate properly with current infrastructures.

British Ministry of Defence Architecture Framework (MODAF)

MODAF is an architecture framework that divides information into seven viewpoints: strategic viewpoint (StV), operational viewpoint (OV), service-oriented viewpoint (SOV), systems viewpoint (SV), acquisition viewpoint (AcV), technical viewpoint (TV), and all viewpoint (AV).

Sherwood Applied Business Security Architecture (SABSA)

SABSA is an enterprise security architecture framework that is similar to the Zachman Framework™. It uses the six communication questions (What? Where? When? Why? Who? and How?) that intersect with six layers (operational, component, physical, logical, conceptual, and contextual). It is a risk-driven architecture. See Table 3-6.

Table 3-6 SABSA Framework Matrix

Viewpoint

Layer

Assets (What)

Motivation (Why)

Process (How)

People (Who)

Location (Where)

Time (When)

Business

Contextual

Business

Risk model

Process model

Organizations and relationships

Geography

Time dependencies

Architect

Conceptual

Business attributes profile

Control objectives

Security strategies and architectural layering

Security entity model and trust framework

Security domain model

Security-related lifetimes and deadlines

Designer

Logical

Business information model

Security policies

Security services

Entity schema and privilege profiles

Security domain definitions and associations

Security processing cycle

Builder

Physical

Business data model

Security rules, practices, and procedures

Security mechanism

Users, applications, and interfaces

Platform and network infrastructure

Control structure execution

Tradesman

Component

Detailed data structures

Security standards

Security tools and products

Identities, functions, actions, and ACLs

Processes, nodes, addresses, and protocols

Security step timing and sequencing

Facilities Manager

Operational

Operational continuity assurance

Operation risk management

Security service management and support

Application and user management and support

Site, network, and platform security

Security operations schedule

Control Objectives for Information and Related Technology (COBIT)

COBIT is a security controls development framework that documents five principles:

  • Meeting stakeholder needs

  • Covering the enterprise end-to-end

  • Applying a single integrated framework

  • Enabling a holistic approach

  • Separating governance from management

These five principles drive control objectives categorized into seven enablers:

  • Principles, policies, and frameworks

  • Processes

  • Organizational structures

  • Culture, ethics, and behavior

  • Information

  • Services, infrastructure, and applications

  • People, skills, and competencies

It also covers the 37 governance and management processes that are needed for enterprise IT.

National Institute of Standards and Technology (NIST) Special Publication (SP) 800 Series

The NIST 800 series is a set of documents that describe U.S. federal government computer security policies, procedures, and guidelines. While NIST publications are written to provide guidance to U.S. government agencies, other organizations can and often do use them. Each SP within the series defines a specific area. Some of the publications included as part of the NIST 800 Series at the time of this writing are:

  • SP 800-12 Rev. 1: Introduces information security principles

  • SP 800-16 Rev. 1: Describes information technology/cybersecurity role-based training for federal departments, agencies, and organizations

  • SP 800-18 Rev. 1: Provides guidelines for developing security plans for federal information systems

  • SP 800-30 Rev. 1: Provides guidance for conducting risk assessments of federal information systems and organizations, amplifying the guidance in SP 800-39

  • SP 800-34 Rev. 1: Provides guidelines on the purpose, process, and format of information system contingency planning development

  • SP 800-35: Provides assistance with selecting, implementing, and managing IT security services through the IT security services life cycle

  • SP 800-36: Provides guidelines for choosing IT security products

  • SP 800-37 Rev. 1: Provides guidelines for applying the risk management framework to federal information systems (Rev. 2 pending)

  • SP 800-39: Provides guidance for an integrated, organization-wide program for managing information security risk

  • SP 800-50: Identifies the four critical steps in the IT security awareness and training life cycle: (1) awareness and training program design; (2) awareness and training material development; (3) program implementation; and (4) post-implementation (companion publication to NIST SP 800-16)

  • SP 800-53 Rev. 4: Provides a catalog of security and privacy controls for federal information systems and a process for selecting controls (Rev. 5 pending)

  • SP 800-53A Rev. 4: Provides a set of procedures for conducting assessments of security controls and privacy controls employed within federal information systems

  • SP 800-55 Rev. 1: Provides guidance on how to use metrics to determine the adequacy of in-place security controls, policies, and procedures

  • SP 800-60 Vol. 1 Rev. 1: Provides guidelines for mapping types of information and information systems to security categories

  • SP 800-61 Rev. 2: Provides guidelines for incident handling

  • SP 800-82 Rev. 2: Provides guidance on how to secure Industrial Control Systems (ICS), including Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), and other control system configurations, such as Programmable Logic Controllers (PLC)

  • SP 800-84: Provides guidance on designing, developing, conducting, and evaluating test, training, and exercise (TT&E) events

  • SP 800-86: Provides guidelines for integrating forensic techniques into incident response

  • SP 800-88 Rev. 1: Provides guidelines for media sanitization

  • SP 800-92: Provides guidelines for computer security log management

  • SP 800-101 Rev. 1: Provides guidelines on mobile device forensics

  • SP 800-115: Provides guidelines for information security testing and assessment

  • SP 800-122: Provides guidelines for protecting the confidentiality of PII

  • SP 800-123: Provides guidelines for general server security

  • SP 800-124 Rev. 1: Provides guidelines for securing mobile devices

  • SP 800-137: Provides guidelines in the development of a continuous monitoring strategy and program

  • SP 800-144: Identifies security and privacy challenges pertinent to public cloud computing and security considerations

  • SP 800-145: Provides the NIST definition of cloud computing

  • SP 800-146: Describes cloud computing benefits and issues, presents an overview of major classes of cloud technology, and provides guidelines on how organizations should consider cloud computing

  • SP 800-150: Provides guidelines for establishing and participating in cyber threat information sharing relationships

  • SP 800-153: Provides guidelines for securing wireless local area networks (WLANs)

  • SP 800-154 (Draft): Provides guidelines on data-centric system threat modeling

  • SP 800-160: Provides guidelines on system security engineering

  • SP 800-161: Provides guidance to federal agencies on identifying, assessing, and mitigating information and communication technology (ICT) supply chain risks at all levels of their organizations

  • SP 800-162: Defines attribute-based access control (ABAC) and its considerations

  • SP 800-163: Provides guidelines on vetting the security of mobile applications

  • SP 800-164: Provides guidelines on hardware-rooted security in mobile devices

  • SP 800-167: Provides guidelines on application whitelisting

  • SP 800-175A and B: Provides guidelines for using cryptographic standards in the federal government

  • SP 800-181: Describes the National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework (NICE Framework)

  • SP 800-183: Describes the Internet of Things (IoT)

Note

For many of the SPs in the list above, you simply need to know that the SP exists. For others, you need to understand details about the SP. Some NIST SPs are covered in more detail in this chapter and in other chapters. Refer to the index in this book to find information on the SPs that are covered in more detail.

HITRUST CSF

HITRUST is a privately held U.S. company that works with healthcare, technology, and information security leaders to establish the Common Security Framework (CSF), which can be used by all organizations that create, access, store, or exchange sensitive and/or regulated data. It was written to address the requirements of multiple regulations and standards. Version 9 was released in September 2017. It is primarily used in the healthcare industry.

This framework has 14 control categories:

  • 0.0: Information Security Management Program

  • 1.0: Access Control

  • 2.0: Human Resources Security

  • 3.0: Risk Management

  • 4.0: Security Policy

  • 5.0: Organization of Information Security

  • 6.0: Compliance

  • 7.0: Asset Management

  • 8.0: Physical and Environmental Security

  • 9.0: Communications and Operations Management

  • 10.0: Information Systems Acquisition, Development, and Maintenance

  • 11.0: Information Security Incident Management

  • 12.0: Business Continuity Management

  • 13.0: Privacy Practices

Within each control category, objectives are defined and levels are assigned based on compliance with documented control standards.

CIS Critical Security Controls

The Center for Internet Security (CIS) released Critical Security Controls version 6.1, which lists 20 CIS controls. According to CIS, the first 5 controls mitigate the vast majority of an organization’s vulnerabilities, and implementing all 20 controls helps secure an organization against today’s most pervasive threats. These are the 20 controls:

  • Inventory of authorized and unauthorized devices

  • Inventory of authorized and unauthorized software

  • Secure configurations for hardware and software on mobile devices, laptops, workstations, and servers

  • Continuous vulnerability assessment and remediation

  • Controlled usage of administrative privileges

  • Maintenance, monitoring, and analysis of audit logs

  • Email and web browser protections

  • Malware defenses

  • Limitation and control of network ports, protocols, and services

  • Data recovery capability

  • Secure configurations for network devices such as firewalls, routers, and switches

  • Boundary defense

  • Data protection

  • Controlled access based on the need to know

  • Wireless access control

  • Account monitoring and control

  • Security skills assessment and appropriate training to fill the gaps

  • Application software security

  • Incident response and management

  • Penetration tests and red team exercises

The CIS provides a mapping of the Critical Security Controls to known standards, frameworks, laws, and regulations. To read more about this, go to https://www.cisecurity.org/controls/.

Committee of Sponsoring Organizations (COSO) of the Treadway Commission Framework

COSO is a corporate governance framework that consists of five interrelated components: control environment, risk assessment, control activities, information and communication, and monitoring activities. COBIT was derived from the COSO framework. Whereas COBIT is for IT governance, COSO is for corporate governance.

Operationally Critical Threat, Asset and Vulnerability Evaluation (OCTAVE)

OCTAVE, which was developed by Carnegie Mellon University’s Software Engineering Institute, provides a suite of tools, techniques, and methods for risk-based information security strategic assessment and planning. Using OCTAVE, an organization implements small teams across business units and IT to work together to address the organization’s security needs. Figure 3-1 shows the phases and processes of OCTAVE Allegro, the most recent version of OCTAVE.

Figure 3-1 OCTAVE Allegro Phases and Processes

Information Technology Infrastructure Library (ITIL)

ITIL is a process management development standard originally developed by the UK government’s Central Computer and Telecommunications Agency (CCTA) and later maintained by the Office of Government Commerce (OGC). ITIL has five core publications: ITIL Service Strategy, ITIL Service Design, ITIL Service Transition, ITIL Service Operation, and ITIL Continual Service Improvement. These five core publications contain 26 processes. Although ITIL has a security component, it is primarily concerned with managing the service-level agreements (SLAs) between an IT department or organization and its customers. Separately, OMB Circular A-130 requires an independent review of security controls every three years.

Table 3-7 lists the five ITIL version 3 core publications and the 26 processes within them.

Table 3-7 ITIL v3 Core Publications and Processes

| ITIL Service Strategy | ITIL Service Design | ITIL Service Transition | ITIL Service Operation | ITIL Continual Service Improvement |
|---|---|---|---|---|
| Strategy Management | Design Coordination | Transition Planning and Support | Event Management | Continual Service Improvement |
| Service Portfolio Management | Service Catalogue | Change Management | Incident Management | |
| Financial Management for IT Services | Service Level Management | Service Asset and Configuration Management | Request Fulfillment | |
| Demand Management | Availability Management | Release and Deployment Management | Problem Management | |
| Business Relationship Management | Capacity Management | Service Validation and Testing | Access Management | |
| | IT Service Continuity Management | Change Evaluation | | |
| | Information Security Management System | Knowledge Management | | |
| | Supplier Management | | | |

Six Sigma

Six Sigma is a process improvement standard that includes two project methodologies that were inspired by Deming’s Plan–Do–Check–Act cycle. The two Six Sigma project methodologies are:

  • DMAIC: Define, Measure, Analyze, Improve, and Control (see Figure 3-2)

  • DMADV: Define, Measure, Analyze, Design, and Verify (see Figure 3-3)

Six Sigma was designed to identify and remove defects in the manufacturing process but can be applied to many business functions, including security.

Figure 3-2 Six Sigma DMAIC

Figure 3-3 Six Sigma DMADV

Capability Maturity Model Integration (CMMI)

Capability Maturity Model Integration (CMMI) is a process improvement approach that addresses three areas of interest: product and service development (CMMI for Development), service establishment and management (CMMI for Services), and product and service acquisition (CMMI for Acquisition). CMMI defines five levels of maturity for processes: Level 1 Initial, Level 2 Managed, Level 3 Defined, Level 4 Quantitatively Managed, and Level 5 Optimizing. Each process within an area of interest is assigned one of the five levels of maturity.

CCTA Risk Analysis and Management Method (CRAMM)

CRAMM is a qualitative risk analysis and management tool developed by the UK government’s Central Computer and Telecommunications Agency (CCTA). A CRAMM review includes three steps:

Step 1. Identify and value assets.

Step 2. Identify threats and vulnerabilities and calculate risks.

Step 3. Identify and prioritize countermeasures.

Note

No organization will implement all the aforementioned frameworks or methodologies. Security professionals should help their organization pick the framework that best fits the needs of the organization.

Extreme Scenario Planning/Worst-Case Scenario

In any security planning, an organization must perform extreme scenario or worst-case scenario planning. This planning ensures that an organization anticipates catastrophic events before they occur and can put in place the appropriate plans.

The first step in worst-case scenario planning is to analyze the threat landscape to identify all the actors that pose significant threats to the organization. These threat actors include both internal and external actors, such as the following:

  • Internal actors

    • Reckless employee

    • Untrained employee

    • Partner

    • Disgruntled employee

    • Internal spy

    • Government spy

    • Vendor

    • Thief

  • External actors

    • Anarchist

    • Competitor

    • Corrupt government official

    • Data miner

    • Government cyber warrior

    • Irrational individual

    • Legal adversary

    • Mobster

    • Activist

    • Terrorist

    • Vandal

These actors can be subdivided into two categories: non-hostile and hostile. Of the lists given above, three actors are usually considered non-hostile: reckless employee, untrained employee, and partner. All the other actors should be considered hostile.

The organization then needs to analyze each of these threat actors according to set criteria. Every threat actor should be given a ranking to help determine which threat actors will be analyzed. Examples of some of the most commonly used criteria include the following:

  • Skill level: None, minimal, operational, adept

  • Resources: Individual, team, organization, government

  • Limits: Code of conduct, legal, extra-legal (minor), extra-legal (major)

  • Visibility: Overt, covert, clandestine, don’t care

  • Objective: Copy, destroy, injure, take, don’t care

  • Outcome: Acquisition/theft, business advantage, damage, embarrassment, technical advantage

With these criteria, the organization must then determine which of the actors it wants to analyze. For example, the organization may choose to analyze all hostile actors that have a skill level of adept, resources of organization or government, and limits of extra-legal (minor) or extra-legal (major). Then the list is consolidated to include only the threat actors that fit all of these criteria.
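
This filtering step amounts to a simple query over the ranked actors. The Python sketch below illustrates it; the rankings assigned to each actor are examples only, not authoritative values:

```python
# Illustrative filtering of threat actors against the selection criteria above.
actors = [
    {"name": "Government cyber warrior", "hostile": True, "skill": "adept",
     "resources": "government", "limits": "extra-legal (major)"},
    {"name": "Competitor", "hostile": True, "skill": "adept",
     "resources": "organization", "limits": "extra-legal (minor)"},
    {"name": "Untrained employee", "hostile": False, "skill": "minimal",
     "resources": "individual", "limits": "code of conduct"},
]

# Keep only hostile, adept actors with organization/government resources and
# extra-legal limits, as in the example selection described in the text.
selected = [
    a for a in actors
    if a["hostile"]
    and a["skill"] == "adept"
    and a["resources"] in ("organization", "government")
    and a["limits"] in ("extra-legal (minor)", "extra-legal (major)")
]

print([a["name"] for a in selected])
# ['Government cyber warrior', 'Competitor']
```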

Next, the organization must determine what it really cares about protecting. Most often this determination is made using the FIPS 199 method or some sort of business impact analysis. Once the vital assets are determined, the organization should select the scenarios that could have a catastrophic impact on the organization by using the objective and outcome values from the threat actor analysis and the asset value and business impact information from the impact analysis.

Scenarios must then be written out so that they can be fully analyzed. For example, an organization may decide to analyze a situation in which a hacktivist group performs prolonged denial-of-service attacks, causing sustained outages that damage the organization’s reputation. A risk determination should then be made for each scenario. Risk determination is discussed later in this chapter.

Once all the scenarios are determined, the organization needs to develop an attack tree for each scenario. This attack tree should include all the steps and/or conditions that must occur for the attack to be successful. The organization must then map security controls to the attack trees.
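
A simple way to keep an attack tree and its mapped controls together is as nested data. The sketch below models a hypothetical denial-of-service scenario; the steps and control names are illustrative, not a complete tree:

```python
# Hypothetical attack tree for the denial-of-service scenario above, with
# candidate controls mapped to each step; names are illustrative only.
attack_tree = {
    "goal": "Sustained outage of customer-facing web service",
    "steps": [
        {"step": "Identify public-facing endpoints",
         "controls": ["minimize exposed services", "boundary defense"]},
        {"step": "Generate high-volume traffic",
         "controls": ["upstream DDoS scrubbing", "rate limiting"]},
        {"step": "Exhaust server resources",
         "controls": ["load balancing", "autoscaling", "connection isolation"]},
    ],
}

# Verify that every step in the tree has at least one mapped control.
unmapped = [s["step"] for s in attack_tree["steps"] if not s["controls"]]
print("Steps without controls:", unmapped or "none")
```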

To determine the security controls that can be used, an organization would need to look at industry standards, including NIST SP 800-53 (see http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf) (discussed later in this chapter) and SANS 20 Critical Security Controls for Effective Cyber Defense (http://www.sans.org/critical-security-controls/). Finally, the controls would be mapped back into the attack tree to ensure that they are implemented at as many levels of the attack as possible.

As you can see, worst-case scenario planning is an art and requires extensive training and effort to ensure success. For the CASP exam, candidates should focus more on the process and steps required than on how to perform the analysis and create the scenario documentation.

Conduct System-Specific Risk Analysis

A risk assessment is a tool used in risk management to identify vulnerabilities and threats, assess the impact of those vulnerabilities and threats, and determine which controls to implement. Risk assessment or analysis has four main goals:

  • Identify assets and asset value.

  • Identify vulnerabilities and threats.

  • Calculate threat probability and business impact.

  • Balance threat impact with countermeasure cost.

Prior to starting a risk assessment, management and the risk assessment team must determine which assets and threats to consider. This process determines the size of the project. The risk assessment team must then provide a report to management on the value of the assets considered. Management can then review and finalize the asset list, adding and removing assets as it sees fit, and then determine the budget of the risk assessment project.

Let’s look at a specific scenario to help understand the importance of system-specific risk analysis. In our scenario, the Sales division decides to implement touchscreen technology and tablet computers to increase productivity. As part of this new effort, a new sales application will be developed that works with the new technology. At the beginning of the deployment, the chief security officer (CSO) attempted to prevent the deployment because the technology is not supported in the enterprise. Upper management decided to allow the deployment. The CSO should work with the Sales division and other areas involved so that the risk associated with the full life cycle of the new deployment can be fully documented and appropriate controls and strategies can be implemented during deployment.

Risk assessment should be carried out before any mergers and acquisitions occur or new technology and applications are deployed.

If a risk assessment is not supported and directed by senior management, it will not be successful. Management must define the purpose and scope of a risk assessment and allocate the personnel, time, and monetary resources for the project.

Make Risk Determination Based upon Known Metrics

To make a risk determination, an organization must perform a formal risk analysis. A formal risk analysis often asks questions such as these: What corporate assets need to be protected? What are the business needs of the organization? What outside threats are most likely to compromise network security?

Different types of risk analysis, including qualitative risk analysis and quantitative risk analysis, should be used to ensure that the data obtained is maximized.

Qualitative Risk Analysis

Qualitative risk analysis does not assign monetary and numeric values to all facets of the risk analysis process. Qualitative risk analysis techniques include intuition, experience, and best practice techniques, such as brainstorming, focus groups, surveys, questionnaires, meetings, interviews, and Delphi. The Delphi technique is a method used to estimate the likelihood and outcome of future events. Although all these techniques can be used, most organizations will determine the best technique(s) based on the threats to be assessed. Experience and education on the threats are needed.

Each member of the group who has been chosen to participate in the qualitative risk analysis uses his or her experience to rank the likelihood of each threat and the damage that might result. After each group member ranks the threat possibility, loss potential, and safeguard advantage, data is combined in a report to present to management.

Two advantages of qualitative over quantitative risk analysis are that qualitative prioritizes the risks and identifies areas for immediate improvement in addressing the threats. Disadvantages of qualitative risk analysis include the following: All results are subjective, and a dollar value is not provided for cost/benefit analysis or for budget help.

Note

When performing risk analyses, all organizations experience issues with any estimate they obtain. This lack of confidence in an estimate is referred to as uncertainty and is expressed as a percentage. Any reports regarding a risk assessment should include the uncertainty level.

Quantitative Risk Analysis

A quantitative risk analysis assigns monetary and numeric values to all facets of the risk analysis process, including asset value, threat frequency, vulnerability severity, impact, and safeguard costs. Equations are used to determine total and residual risks.

An advantage of quantitative over qualitative risk analysis is that quantitative uses less guesswork than qualitative. Disadvantages of quantitative risk analysis include the difficulty of the equations, the time and effort needed to complete the analysis, and the level of data that must be gathered for the analysis.

Most risk analyses are a hybrid of quantitative and qualitative techniques. Most organizations favor using quantitative risk analysis for tangible assets and qualitative risk analysis for intangible assets.

Keep in mind that even though quantitative risk analysis uses numeric value, a purely quantitative analysis cannot be achieved because some level of subjectivity is always part of the data. This type of estimate should be based on historical data, industry experience, and expert opinion.

Magnitude of Impact Based on ALE and SLE

Risk impact or magnitude of impact is an estimate of how much damage a negative risk can have or the potential opportunity cost if a positive risk is realized. Risk impact can be measured in financial terms (quantitative) or with a subjective measurement scale (qualitative). Risks usually are ranked on a scale that is determined by the organization. High-level risks result in significant loss, and low-level risks result in negligible losses.

If magnitude of impact can be expressed in financial terms, use of financial value to quantify the magnitude has the advantage of being easily understood by personnel. The financial impact might be long-term costs in operations and support, loss of market share, short-term costs in additional work, or opportunity cost.

Two calculations are used when determining the magnitude of impact: single loss expectancy (SLE) and annualized loss expectancy (ALE).

SLE

The SLE is the monetary impact of each threat occurrence. To determine the SLE, you must know the asset value (AV) and the exposure factor (EF). The EF is the percent value or functionality of an asset that will be lost when a threat event occurs. The calculation for obtaining the SLE is as follows:

SLE = AV × EF

For example, say that an organization has a web server farm with an AV of $20,000. If the risk assessment has determined that a power failure is a threat agent for the web server farm and the exposure factor for a power failure is 25%, the SLE for this event equals $5,000.
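
The same calculation can be expressed directly in code. The following Python sketch reproduces the web server farm example from the text:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = AV x EF, where EF is expressed as a fraction (25% -> 0.25)."""
    return asset_value * exposure_factor

# Web server farm example from the text: AV = $20,000, EF = 25%.
print(single_loss_expectancy(20_000, 0.25))  # 5000.0
```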

ALE

The ALE is the expected annual monetary loss from a given threat. To determine the ALE, you must know the SLE and the annualized rate of occurrence (ARO). (Note that ARO is explained later in this chapter, in the “Likelihood of Threat” section.) The calculation for obtaining the ALE is as follows:

ALE = SLE × ARO

Using the previously mentioned example, if the risk assessment has determined that the ARO for a power failure of the web server farm is 0.5 (that is, a failure is expected once every two years), the ALE for this event equals $2,500.

Using the ALE, the organization can decide whether to implement controls. If the annual cost of a control to protect the web server farm is more than the ALE, the organization could easily choose to accept the risk by not implementing the control. If the annual cost of the control to protect the web server farm is less than the ALE, the organization should consider implementing the control.
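
Continuing the example, the ALE calculation and the control-cost comparison can be sketched as follows; the control cost shown is a hypothetical figure used only to illustrate the decision:

```python
def annualized_loss_expectancy(sle, aro):
    """ALE = SLE x ARO, where ARO is expected occurrences per year."""
    return sle * aro

sle = 5_000          # from the SLE example above
aro = 0.5            # power failure expected once every two years
ale = annualized_loss_expectancy(sle, aro)
print(ale)           # 2500.0

annual_control_cost = 1_800   # hypothetical annual cost of a protective control
if annual_control_cost < ale:
    print("Control is worth considering: its cost is below the ALE.")
else:
    print("Accepting the risk may be cheaper than the control.")
```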

Likelihood of Threat

The likelihood of threat is a measurement of the chance that a particular risk event will impact the organization. When the vulnerabilities and threats have been identified, the loss potential for each must be determined. This loss potential is determined by using the likelihood of the event combined with the impact that such an event would cause. An event with a high likelihood and a high impact would be given more importance than an event with a low likelihood and a low impact. The chance of natural disasters will vary based on geographic location. However, the chances of human-made risks are based more on organizational factors, including visibility, location, technological footprint, and so on. The levels used for threat likelihood are usually high, moderate, and low.
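
Likelihood and impact are frequently combined in a simple risk matrix to rank events. The sketch below uses the high/moderate/low levels from the text; the numeric scores and thresholds are illustrative assumptions, and organizations define their own scales:

```python
# Simple likelihood x impact ranking using high/moderate/low levels.
# Numeric scores and thresholds are illustrative assumptions only.
SCORE = {"low": 1, "moderate": 2, "high": 3}

def risk_rank(likelihood, impact):
    product = SCORE[likelihood] * SCORE[impact]
    if product >= 6:
        return "high"
    if product >= 3:
        return "moderate"
    return "low"

print(risk_rank("high", "high"))      # high
print(risk_rank("low", "moderate"))   # low
print(risk_rank("moderate", "high"))  # high
```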

The likelihood that an event will occur is usually determined by examining the motivation, source, ARO, and trend analysis.

Motivation

Motivation is what causes organizations and their attackers to act. Not all risks that an organization identifies will have a motivation behind them. For example, natural disasters have no motivation or reasoning behind their destruction; they simply occur when climatic or other natural conditions are favorable.

However, most human-made attacks have motivations. These motivations are usually similar to the outcomes discussed earlier in this chapter, in the “Extreme Scenario Planning/Worst-Case Scenario” section. If your organization identifies any risks that are due to the actions of other people or organizations, these risks are usually motivated by the following:

  • Acquisition/theft

  • Business advantage

  • Damage

  • Embarrassment

  • Technical advantage

Understanding the motivation behind these risks is vital to determining which risk strategy your organization should employ.

Source

As discussed earlier in this chapter, in the “Extreme Scenario Planning/Worst-Case Scenario” section, the sources of organizational risks can fall into several broad categories. Internal sources are those within an organization, and external sources are those outside the organization. These two categories can be further divided into hostile and non-hostile sources. For example, an improperly trained employee might inadvertently be susceptible to a social engineering attack, but a disgruntled employee may intentionally sabotage organizational assets.

When an organization understands the source and motivation behind the risk, the attack route and mechanism can be better analyzed to help determine which controls could be employed to minimize the risk.

ARO

The annualized rate of occurrence (ARO) is an estimate of how often a given threat might occur annually. Remember that an estimate is only as good as the certainty of the estimate. It might be possible to obtain the ARO internally just by examining logs and archive information. If you do not have access to this type of internal information, consult with subject matter experts (SMEs), industry experts, organizational standards and guidelines, and other authoritative resources to ensure that you obtain the best estimate for your calculations.

Trend Analysis

In risk management, it is sometimes necessary to identify trends. In this process, historical data is gathered, a set of mathematical parameters is applied to it, and the result is processed to determine any variance from an established baseline.

If you do not know the established baseline, you cannot identify variances from it or track trends in those variances. Organizations should establish procedures for capturing baseline statistics and for regularly comparing current statistics against the baselines. Also, organizations must recognize when new baselines should be established. For example, if your organization implements a two-server web farm, the baseline would be vastly different from the baseline if that farm were upgraded to four servers or if the internal hardware in the servers were upgraded.

Security professionals must also research growing trends worldwide, especially in the industry in which the organization exists. Financial industry risk trends vary from healthcare industry risk trends, but there are some common areas that both industries must understand. For example, any organizations that have ecommerce sites must understand the common risk trends and be able to analyze their internal sites to determine whether their resources are susceptible to these risks.

Return on Investment (ROI)

The term return on investment (ROI) refers to the money gained or lost after an organization makes an investment. ROI is a necessary metric for evaluating security investments.

ROI measures the expected improvement over the status quo against the cost of the action required to achieve the improvement. In the security field, improvement is not really the goal. Reduction in risk is the goal. But it is often hard to determine exactly how much an organization will save if it makes an investment. Some of the types of loss that can occur include:

  • Productivity loss: This includes downtime and repair time. If personnel are not performing their regular duties because of a security issue, your organization has experienced a productivity loss.

  • Revenue loss during outage: If an asset is down and cannot be accessed, the organization loses money with each minute and hour that the asset is down. The loss is compounded dramatically if an organization’s Internet connection goes down because that affects all organizational assets.

  • Data loss: If data is lost, it must be restored, which ties back to productivity loss because personnel must restore the data backup. However, organizations must also consider conditions where backups are destroyed, which could be catastrophic.

  • Data compromise: This includes disclosure or modification. Measures must be taken to ensure that data, particularly intellectual data, is protected.

  • Repair costs: This includes costs to replace hardware or costs incurred to employ services from vendors.

  • Loss of reputation: Any security incident that occurs can result in a loss of reputation with your organization’s partners and customers. Recent security breaches at popular retail chains have resulted in customer reluctance to trust the stores with their data.

Let’s look at a scenario to better understand how ROI can really help with the risk analysis process. Suppose two companies are merging. One company uses mostly hosted services from an outside vendor, while the other uses mostly in-house products. When the merging project is started, the following goals for the merged systems are set:

  • Ability to customize systems at the department level

  • Quick implementation along with an immediate ROI

  • Administrative-level control over all products by internal IT staff

The project manager states that the in-house products are the best solution. Because of staff shortages, the security administrator argues that security will be best maintained by continuing to use outsourced services. The best way to resolve this issue is to:

Step 1. Calculate the time required to deploy and support the in-sourced systems given the staff shortage.

Step 2. Compare those costs against the expected ROI minus the outsourcing costs.

Step 3. Present the documented numbers to management for a final decision.

When calculating ROI, there is a degree of uncertainty and subjectivity involved, but once you decide what to measure and estimate, the question of how to measure it should be somewhat easier. The most effective measures are likely to be those you are already using because they enable you to compare security projects with all other projects. Two popular methods are payback and net present value (NPV).

Payback

Payback is a simple calculation that compares ALE against the expected savings as a result of an investment. Let’s use the earlier example of the server that results in a $2,500 ALE. The organization may want to deploy a power backup if it can be purchased for less than $2,500. However, if that power backup costs a bit more, the organization might be willing to still invest in the device if it were projected to provide protection for more than one year with some type of guarantee.
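
A payback comparison can be sketched in the same way. Assuming the power backup costs $6,000 (a hypothetical price) and avoids the full $2,500 ALE each year, the payback period is simply the purchase cost divided by the annual savings:

# Payback sketch: years needed for annual savings to cover the purchase price.
backup_cost = 6_000        # hypothetical purchase price of the power backup
annual_savings = 2_500     # ALE avoided each year, from the earlier example

payback_years = backup_cost / annual_savings   # 2.4 years
print(f"Payback period: {payback_years:.1f} years")

If the device is expected to remain in service (and under warranty) longer than the payback period, the investment may still be justified.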

Net Present Value (NPV)

Net present value (NPV) adds another dimension to payback by considering the fact that money spent today is worth more than savings realized tomorrow. In the example above, the organization may purchase a power backup that comes with a five-year warranty. To calculate NPV, you need to know the discount rate, which determines how much less money is worth in the future. For our example, we’ll use a discount rate of 10%. Now to the calculation: You divide the yearly savings ($2,500) by 1.1 (that is, 1 plus the discount rate) raised to the power of the year you want to analyze. So this is what the calculation would look like for the first year:

NPV = $2,500 / (1.1) = $2,272.73

The result is the savings expected in today’s dollar value. For each year, you could then recalculate NPV by raising the 1.1 value to the year number. The calculation for the second year would be:

NPV = $2,500 / (1.1)² = $2,066.12

If you’re trying to weigh costs and benefits, and the costs are immediate but the benefits are long term, NPV can provide a more accurate measure of whether a project is truly worthwhile.
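
Carrying the example forward, the following sketch discounts five years of $2,500 savings at the 10% rate and compares the total against an assumed $6,000 purchase price (again a hypothetical figure):

# NPV sketch: discount five years of $2,500 savings at 10% per year.
annual_savings = 2_500
discount_rate = 0.10
years = 5
backup_cost = 6_000        # hypothetical purchase price

npv_savings = sum(annual_savings / (1 + discount_rate) ** year
                  for year in range(1, years + 1))
# npv_savings is roughly $9,477, comfortably above the $6,000 cost.
print(f"Present value of savings: ${npv_savings:,.2f}")
print("Worthwhile" if npv_savings > backup_cost else "Not worthwhile")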

Total Cost of Ownership

Organizational risks are everywhere and range from easily insurable property risks to risks that are hard to anticipate and calculate, such as the loss of a key employee. The total cost of ownership (TCO) of risk measures the overall costs associated with running the organizational risk management process, including insurance premiums, finance costs, administrative costs, and any losses incurred. This value should be compared to the overall company revenues and asset base. TCO provides a way to assess how an organization’s risk-related costs are changing compared to the overall organization growth rate. This TCO can also be compared to industry baselines that are available from trade groups and industry organizations. Working with related business and industry experts ensures that your organization is obtaining relevant and comparable risk-related data. For example, a financial organization should not compare its risk TCO to TCOs of organizations in the healthcare field.

Calculating risk TCO has many advantages. It can help organizations discover inconsistencies in their risk management approach. It can also identify areas where managing a particular risk is excessive compared to similar risks managed elsewhere. Risk TCO can also generate direct cost savings by highlighting risk management process inefficiency.

However, comparable risk TCO figures are often difficult to find because many direct competitors protect this sensitive data. Relying on trade bodies and industry standards bodies can often help alleviate this problem. Also, keep in mind that the risk TCO exercise may be seen as a cost-cutting activity, resulting in personnel not fully buying in to the process.

Some of the guidelines an organization should keep in mind when determining risk TCO are as follows:

  • Determine a framework that will be used to break down costs into categories, including risk financing, risk administration, risk compliance costs, and self-insured losses.

  • Identify the category costs by expressing them as a percentage of overall organizational revenue.

  • Employ any data from trade bodies for comparison with each category’s figures.

  • Analyze any differences between your organization’s numbers and industry figures for reasons of occurrence.

  • Set future targets for each category.

When calculating and analyzing risk TCO, you should remember these basic rules:

  • Industry benchmarks may not always be truly comparable to your organization’s data.

  • Cover some minor risks within the organization.

  • Employ risk management software to aid in the decision making because of the complex nature of risk management.

  • Remember the value of risk management when budgeting. It is not merely a cost.

  • Risk TCO does not immediately lead to cost savings. Savings occur over time.

  • Not all possible solutions will rest within the organization. External specialists and insurance brokers may be needed.

Translate Technical Risks into Business Terms

Technical cybersecurity risks represent a threat that is largely misunderstood by nontechnical personnel. Security professionals must bridge the knowledge gap in a manner that the stakeholders understand. To properly communicate technical risks, security professionals must first understand their audience and then be able to translate those risks into business terms that the audience understands.

The audience that needs to understand the technical risks includes semi-technical audiences, nontechnical leadership, the board of directors and executives, and regulators. The semi-technical audience understands the security operations difficulties and often consists of powerful allies. Typically, this audience needs a data-driven, high-level message based on verifiable facts and trends. The nontechnical leadership audience needs the message to be put in context with their responsibilities. This audience needs the cost of cybersecurity expenditures to be tied to business performance. Security professionals should present metrics that show how cyber risk is trending without using popular jargon. The board of directors and executives are primarily concerned with business risk management and managing return on assets. The message to this group should translate technical risk into common business terms and present metrics about cybersecurity risk and performance.

Finally, when communicating with regulators, it is important to be thorough and transparent. In addition, organizations may want to engage a third party to do a gap assessment before an audit. This will help security professionals find and remediate weaknesses prior to the audit and enables the third party to speak on behalf of the security program.

To frame the technical risks into business terms for these audiences, security professionals should focus on business disruption, regulatory issues, and bad press. If a company’s database is attacked and, as a result, the website cannot sell products to customers, this is a significant disruption of business operations. If an incident occurs that results in a regulatory investigation and fines, a regulatory issue has arisen. Bad press can result in lost sales and costs to repair the organization’s image.

Security professionals must understand the risk metrics and what each metric costs the organization. Although security professionals may not definitively know the return on investment (ROI), they should take the security incident frequency at the organization and assign costs in terms of risk exposure for every risk. It will also be helpful to match the risks with the assets protected to make sure the organization’s investment is protecting the most valuable assets.

Recommend Which Strategy Should Be Applied Based on Risk Appetite

Risk reduction is the process of altering elements of the organization in response to risk analysis. After an organization understands the ROI and TCO, it must determine how to handle the risk, which is based on the organization’s risk appetite, or how much risk the organization can withstand on its own.


The four basic strategies you must understand for the CASP exam are avoid, transfer, mitigate, and accept.

Avoid

The avoid strategy involves terminating an activity that causes a risk or choosing an alternative that is not as risky. Unfortunately, this method cannot be used against all threats. An example of avoidance is organizations utilizing alternate data centers in different geographic locations to prevent a natural disaster from affecting both facilities.

Many times it is impossible to avoid risk. For example, if a CEO purchases a new mobile device and insists that he be given internal network access via this device, avoiding the risk is impossible. In this case, you would need to find a way to mitigate and/or transfer the risk.

Consider the following scenario: A company is in negotiations to acquire another company for $1,000,000. Due diligence activities have uncovered systemic security issues in the flagship product of the company being purchased. A complete product rewrite because of the security issues is estimated to cost $1,500,000. In this case, the company should not acquire the other company because the acquisition would actually end up costing $2,500,000.

Transfer

The transfer strategy involves passing the risk on to a third party, such as an insurance company. An example is to outsource certain functions to a provider, usually involving an SLA with a third party. However, the risk could still rest with the original organization, depending on the provisions in the contract. If your organization plans to use this method, legal counsel should ensure that the contract provides the level of protection needed.

Consider the following scenario: A small business has decided to increase revenue by selling directly to the public through an online system. Initially this will be run as a short-term trial. If it is profitable, the system will be expanded and form part of the day-to-day business. Two main business risks for the initial trial have been raised:

  • Internal IT staff have no experience with secure online credit card processing.

  • An internal credit card processing system will expose the business to additional compliance requirements.

In this situation, it is best to transfer the initial risks by outsourcing payment processing to a third-party service provider.

Mitigate

The mitigate strategy involves defining the acceptable risk level the organization can tolerate and reducing the risk to that level. This is the most common strategy employed. It includes implementing security controls, such as IDSs, IPSs, and firewalls.

Consider the following scenario: Your company’s web server experiences a security incident three times a year, costing the company $1,500 in downtime per occurrence. The web server is only for archival access and is scheduled to be decommissioned in five years. The cost of implementing software to prevent this incident would be $15,000 initially, plus $1,000 a year for maintenance. The cost of the security incident is calculated as follows:

($1,500 per occurrence × 3 per year) × 5 years = $22,500

The cost to prevent the problem is calculated as follows:

$15,000 software cost + ($1,000 maintenance × 5 years) = $20,000

In this situation, mitigation (implementing the software) is cheaper than accepting the risk.
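
Expressed as a short calculation, the decision looks like this; the figures come directly from the scenario, and only the variable names are invented here:

# Mitigate-versus-accept sketch for the archival web server scenario.
cost_per_incident = 1_500
incidents_per_year = 3
remaining_years = 5
software_cost = 15_000
maintenance_per_year = 1_000

cost_of_accepting = cost_per_incident * incidents_per_year * remaining_years   # $22,500
cost_of_mitigating = software_cost + maintenance_per_year * remaining_years    # $20,000

print("Mitigate" if cost_of_mitigating < cost_of_accepting else "Accept the risk")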

Accept

The accept strategy involves understanding and accepting the level of risk as well as the cost of damages that can occur. This strategy is usually used to cover residual risk, which is discussed later in this chapter. It is usually employed for assets that have small exposure or value.

However, sometimes an organization has to accept risks because the budget that was originally allocated for implementing controls to protect against risks is depleted. Accepting the risk is fine if the risks and the assets are not high profile. However, if they are considered high-profile risks, management should be informed of the need for another financial allocation to mitigate the risks.

Risk Management Processes


According to NIST SP 800-30 Rev. 1, common information-gathering techniques used in risk analysis include automated risk assessment tools, questionnaires, interviews, and policy document reviews. Keep in mind that multiple sources should be used to determine the risks to a single asset. NIST SP 800-30 identifies the following steps in the risk assessment process:

Step 1. Prepare for the assessment.

Step 2. Conduct the assessment.

  • Identify threat sources and events.

  • Identify vulnerabilities and predisposing conditions.

  • Determine the likelihood of occurrence.

  • Determine the magnitude of the impact.

  • Determine risk as a combination of likelihood and impact.

Step 3. Communicate the results.

Step 4. Maintain the assessment.

Figure 3-4 shows the risk assessment process according to NIST SP 800-30.


Figure 3-4 NIST SP 800-30 Risk Assessment Process
Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

The risk management process includes asset valuation and vulnerabilities and threat identification. Security professionals must also understand exemptions, deterrence, inherent risk, and residual risk.

Information and Asset (Tangible/Intangible) Value and Costs

As stated earlier, the first step of any risk assessment is to identify the assets and determine the asset values. Assets are both tangible and intangible. Tangible assets include computers, facilities, supplies, and personnel. Intangible assets include intellectual property, data, and organizational reputation. The value of an asset should be considered in respect to the asset owner’s view. These six considerations can be used to determine an asset’s value:

  • Value to owner

  • Work required to develop or obtain the asset

  • Costs to maintain the asset

  • Damage that would result if the asset were lost

  • Cost that competitors would pay for the asset

  • Penalties that would result if the asset were lost

After determining the value of the assets, you should determine the vulnerabilities and threats to each asset.

Vulnerabilities and Threats Identification


When determining vulnerabilities and threats to an asset, considering the threat agents first is often easiest. Threat agents can be grouped into the following six categories:

  • Human: This category includes both malicious and non-malicious insiders and outsiders, terrorists, spies, and terminated personnel.

  • Natural: This category includes floods, fires, tornadoes, hurricanes, earthquakes, and other natural disasters or weather events.

  • Technical: This category includes hardware and software failure, malicious code, and new technologies.

  • Physical: This category includes CCTV issues, perimeter measures failure, and biometric failure.

  • Environmental: This category includes power and other utility failures, traffic issues, biological warfare, and hazardous material issues (such as spillage).

  • Operational: This category includes any process or procedure that can affect CIA.

These categories should be used along with the threat actors identified in the “Extreme Scenario Planning/Worst-Case Scenario” section earlier in this chapter, to help your organization develop the most comprehensive list of threats possible.

Exemptions

While most organizations should complete a thorough risk analysis and take measures to protect against all risks, some organizations have exemptions from certain types of risks due to the nature of their business and government standards.

For example, the U.S. Environmental Protection Agency (EPA) has regulations regarding the use and storage of certain chemicals, such as ammonia and propane. Organizations that store quantities of these chemicals above a certain limit are required to follow the EPA’s Accidental Release Prevention provisions and Risk Management Program regulations. However, most farmers who need ammonia as a soil nutrient are not subject to these regulations. Neither are propane retail facilities.

In most cases, organizations should employ legal counsel to ensure that they understand any exemptions that they think apply to them.

Deterrence

Deterrence is the use of the threat of punishment to deter persons from committing certain actions. Many government agencies employ this risk management method by posting legal statements warning that unauthorized users who gain access to their networks or systems face fines and/or imprisonment. Organizations employ similar methods, including warnings when accessing mail systems, ecommerce systems, or other systems that may contain confidential data.

Inherent

Inherent risk is risk that has no mitigation factors or treatments applied to it because it is virtually impossible to avoid. Consider an attacker who is determined and has the skills to physically access an organization’s facility. While many controls, including guards, CCTV, fencing, locks, and biometrics, can be implemented to protect against this threat, an organization cannot truly ensure that this risk will never occur if the attacker has the level of skills needed. This does not mean that the organization should not implement these controls, which are considered baseline controls.

When possible, inherent risks should be identified for the following reasons:

  • Knowing the risks helps identify critical controls.

  • Audits can then be focused on critical controls.

  • Inherent risks that have potential catastrophic consequences can be subjected to more stringent scenario testing.

  • The board and management of the organization can be made aware of risks that may have potentially catastrophic consequences.

Residual

No matter how careful an organization is, it is impossible to totally eliminate all risks. Residual risk is the level of risk that remains after safeguards or controls have been implemented. Residual risk is represented using the following equation:

Residual risk = Total risk – Countermeasures

This equation is considered to be conceptual in nature rather than useful for actual calculation.

Continuous Improvement/Monitoring

Continuous improvement and monitoring of risk management are vital to any organization. To ensure continuous improvement, all changes to the enterprise must be tracked so that security professionals can assess the risks that those changes bring. Security controls should be configured to address the changes as close to the deployment of the changes as possible. For example, if your organization decides to upgrade a vendor application, security professionals must assess the application to see how it affects enterprise security.

Certain elements within the organization should be automated to help with the continuous improvements and monitoring, including audit log collection and analysis, antivirus and malware detection updates, and application and operating system updates.

Continuous monitoring involves change management, configuration management, control monitoring, and status reporting. Security professionals should regularly evaluate the enterprise security controls to ensure that changes do not negatively impact the enterprise.

Management should adopt a common risk vocabulary and must clearly communicate expectations. In addition, employees, including new hires, must be given training to ensure that they fully understand risk as it relates to the organization.

Business Continuity Planning

Continuity planning deals with identifying the impact of any disaster and ensuring that a viable recovery plan for each function and system is implemented. Its primary focus is how to carry out the organizational functions when a disruption occurs.

A business continuity plan (BCP) considers all aspects that are affected by a disaster, including functions, systems, personnel, and facilities. It lists and prioritizes the services needed, particularly the telecommunications and IT functions.

Business Continuity Scope and Plan

As you already know, creating a BCP is vital to ensuring that the organization can recover from a disaster or a disruptive event. Several groups have established standards and best practices for business continuity. These standards and best practices include many common components and steps.

The following sections cover the personnel components, the project scope, and the business continuity steps that must be completed.

Personnel Components

Senior management are the most important personnel in the development of the BCP. Senior management support of business continuity and disaster recovery drives the overall organizational view of the process. Without senior management support, this process will fail.

Senior management set the overall goals of business continuity and disaster recovery. A business continuity coordinator named by senior management should lead the BCP committee. The committee develops, implements, and tests the BCP and disaster recovery plan (DRP). The BCP committee should include a representative from each business unit. At least one member of senior management should be part of this committee. In addition, the organization should ensure that the IT department, legal department, security department, and communications department are represented because of the vital roles these departments play during and after a disaster.

With management direction, the BCP committee must work with business units to ultimately determine the business continuity and disaster recovery priorities. Senior business unit managers are responsible for identifying and prioritizing time-critical systems. After all aspects of the plans have been determined, the BCP committee should be tasked with regularly reviewing the plans to ensure that they remain current and viable. Senior management should closely monitor and control all business continuity efforts and publicly praise any successes.

After an organization gets into disaster recovery planning, other teams are involved.

Project Scope

To ensure that the development of the BCP is successful, senior management must define the BCP scope. A business continuity project with an unlimited scope can often become too large for the BCP committee to handle correctly. For this reason, senior management might need to split the business continuity project into smaller, more manageable pieces.

When considering the splitting of the BCP into pieces, an organization might want to split the pieces based on geographic location or facility. However, an enterprisewide BCP should be developed to ensure compatibility of the individual plans.

Business Continuity Steps

Many organizations have developed standards and guidelines for performing business continuity and disaster recovery planning. One of the most popular standards is NIST SP 800-34 Revision 1 (Rev. 1).


The following list summarizes the steps in SP 800-34 Rev. 1:

Step 1. Develop contingency planning policy.

Step 2. Conduct business impact analysis (BIA).

Step 3. Identify preventive controls.

Step 4. Create contingency strategies.

Step 5. Develop an information system contingency plan.

Step 6. Test, train, and exercise.

Step 7. Maintain the plan.

Figure 3-5 shows a more detailed list of the tasks included in SP 800-34 Rev. 1.


Figure 3-5 NIST SP 800-34 R1
Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

NIST 800-34 R1 includes the following types of plans that should be included during contingency planning:

  • Business continuity plan (BCP): Focuses on sustaining an organization’s mission/business processes during and after a disruption

  • Continuity of operations plan (COOP): Focuses on restoring an organization’s mission essential functions (MEF) at an alternate site and performing those functions for up to 30 days before returning to normal operations

  • Crisis communications plan: Documents standard procedures for internal and external communications in the event of a disruption. It also provides various formats for communications appropriate to the incident

  • Critical infrastructure protection (CIP) plan: A set of policies and procedures that serve to protect and recover critical infrastructure assets and mitigate risks and vulnerabilities

  • Cyber incident response plan: Establishes procedures to address cyber attacks against an organization’s information system(s)

  • Disaster recovery plan (DRP): An information system–focused plan designed to restore operability of the target system, application, or computer facility infrastructure at an alternate site after an emergency

  • Information system contingency plan (ISCP): Provides established procedures for the assessment and recovery of a system following a system disruption

  • Occupant emergency plan: Outlines first-response procedures for occupants of a facility in the event of a threat or an incident to the health and safety of personnel, the environment, or property

Develop Contingency Planning Policy

The contingency planning policy statement should define the organization’s overall contingency objectives and establish the organizational framework and responsibilities for system contingency planning. To be successful, senior management, most likely the CIO, must support a contingency program and be included in the process to develop the program policy. The policy must reflect the FIPS 199 impact levels and the contingency controls that each impact level establishes. Key policy elements are as follows:

  • Roles and responsibilities

  • Scope as it applies to common platform types and organization functions (for example, telecommunications, legal, media relations) subject to contingency planning

  • Resource requirements

  • Training requirements

  • Exercise and testing schedules

  • Plan maintenance schedule

  • Minimum frequency of backups and storage of backup media

Conduct the BIA

The purpose of the BIA is to correlate the system with the critical mission/business processes and services provided and, based on that information, characterize the consequences of a disruption.

The development of a BCP depends most on the development of the BIA. The BIA helps an organization understand what impact a disruptive event would have on the organization. It is a management-level analysis that identifies the impact of losing an organization’s resources.


The four main steps of the BIA are as follows:

Step 1. Identify critical processes and resources.

Step 2. Identify outage impacts and estimate downtime.

Step 3. Identify resource requirements.

Step 4. Identify recovery priorities.

The BIA relies heavily on any vulnerability analysis and risk assessment that has been completed. The vulnerability analysis and risk assessment may be performed by the BCP committee or by a separately appointed risk assessment team.

Identify Critical Processes and Resources

When identifying the critical processes and resources of an organization, the BCP committee must first identify all the business units or functional areas within the organization. After all units have been identified, the BCP team should select which individuals will be responsible for gathering all the needed data and select how to obtain the data.

These individuals will gather the data using a variety of techniques, including questionnaires, interviews, and surveys. They might also actually perform a vulnerability analysis and risk assessment or use the results of these tests as input for the BIA.

During the data gathering process, the organization’s business processes and functions and the resources on which these processes and functions depend should be documented. This list should include all business assets, including physical and financial assets that are owned by the organization, as well as any assets that provide competitive advantage or credibility.

Identify Outage Impacts and Estimate Downtime

After determining all the business processes, functions, and resources, the organization should determine the criticality level of each resource.

As part of determining how critical an asset is, you need to understand the following terms:

  • Maximum tolerable downtime (MTD): The maximum amount of time that an organization can tolerate a single resource or function being down. This is also referred to as maximum period of time of disruption (MPTD).

  • Mean time to repair (MTTR): The average time required to repair a single resource or function when a disaster or disruption occurs.

  • Mean time between failures (MTBF): The estimated amount of time a device will operate before a failure occurs. This amount is calculated by the device vendor. System reliability is increased by a higher MTBF and lower MTTR.

  • Recovery time objective (RTO): The shortest time period after a disaster or disruptive event within which a resource or function must be restored to avoid unacceptable consequences. RTO assumes that an acceptable period of downtime exists. RTO should be smaller than MTD.

  • Work recovery time (WRT): The difference between MTD and RTO; that is, the time remaining after the RTO has elapsed and before the maximum tolerable downtime is reached (see the sketch after this list).

  • Recovery point objective (RPO): The point in time to which the disrupted resource or function must be returned.
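
The relationships among these targets can be checked with a quick sketch. The hour values below are illustrative only, not recommended targets:

# Recovery-target sanity check: MTD, RTO, and WRT for a single function.
mtd_hours = 24     # maximum tolerable downtime (illustrative)
rto_hours = 8      # time allowed to restore the resource (illustrative)

assert rto_hours < mtd_hours, "RTO should be smaller than MTD"
wrt_hours = mtd_hours - rto_hours    # WRT = MTD - RTO -> 16 hours
print(f"WRT = {wrt_hours} hours remain for verification and resuming normal work")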

Note

The outage terms covered above can also be used in SLAs, as discussed in Chapter 2, “Security, Privacy Policies, and Procedures.”

Each organization must develop its own documented criticality levels. Organizational resource and function criticality levels include critical, urgent, important, normal, and nonessential. Critical resources are the resources that are most vital to the organization’s operation and that should be restored within minutes or hours of the disaster or disruptive event. Urgent resources should be restored in 24 hours but are not considered as important as critical resources. Important resources should be restored in 72 hours but are not considered as important as critical or urgent resources. Normal resources should be restored in 7 days but are not considered as important as critical, urgent, or important resources. Nonessential resources should be restored within 30 days.

Each process, function, and resource must have its criticality level defined to act as an input into the DRP. If critical priority levels are not defined, a DRP might not be operational within the organization’s time frame for recovery.

Identify Resource Requirements

After the criticality level of each function and resource is determined, you need to determine all the resource requirements for each function and resource. For example, an organization’s accounting system might rely on a server that stores the accounting application, another server that holds the database, various client systems that perform the accounting tasks over the network, and the network devices and infrastructure that support the system. Resource requirements should also consider any human resources requirements. When human resources are unavailable, the organization can be just as negatively impacted as when technological resources are unavailable.

The organization must document the resource requirements for every resource that would need to be restored when the disruptive event occurs—including device name, operating system or platform version, hardware requirements, and device interrelationships.

Identify Recovery Priorities

After all the resource requirements have been identified, the organization must identify the recovery priorities. It can establish recovery priorities by taking into consideration process criticality, outage impacts, tolerable downtime, and system resources. After all this information is compiled, the result is an information system recovery priority hierarchy.

Three main levels of recovery priorities should be used: high, medium, and low. The BIA stipulates the recovery priorities but does not provide the recovery solutions. Those are given in the DRP.

Identify Preventive Controls

The outage impacts identified in the BIA may be mitigated or eliminated through preventive measures that deter, detect, and/or reduce impacts to the system. Where feasible and cost-effective, preventive methods are preferable to actions that may be necessary to recover the system after a disruption.

Create Contingency Strategies

Organizations are required to adequately mitigate the risk arising from use of information and information systems in the execution of mission/business processes. This includes backup methods, offsite storage, recovery, alternate sites, and equipment replacement.

Plan Testing, Training, and Exercises (TT&E)

Testing, training, and exercises for business continuity should be carried out regularly based on NIST SP 800-84. Organizations should conduct TT&E events periodically, following organizational or system changes or the issuance of new TT&E guidance, or as otherwise needed.

Maintain the Plan

To be effective, the plan must be maintained in a ready state that accurately reflects system requirements, procedures, organizational structure, and policies. As a general rule, the plan should be reviewed for accuracy and completeness at an organization-defined frequency or whenever significant changes occur to any element of the plan.

IT Governance

Within an organization, information security governance consists of several components that are used to provide comprehensive security management. Data and other assets should be protected mainly based on their value and sensitivity. Strategic plans guide the long-term security activities (3–5 years or more). Tactical plans achieve the goals of the strategic plan and are shorter in duration (6–18 months).

Because management is the most critical link in the computer security chain, management approval must be obtained early in the process of forming and adopting an information security policy. Senior management must take the following measures prior to the development of any organizational security policy:

  1. Define the scope of the security program.

  2. Identify all the assets that need protection.

  3. Determine the level of protection that each asset needs.

  4. Determine personnel responsibilities.

  5. Develop consequences for noncompliance with the security policy.

By fully endorsing an organizational security policy, senior management accepts the ownership of an organization’s security. High-level policies are statements that indicate senior management’s intention to support security.

After senior management approval has been obtained, the first step in establishing an information security program is to adopt an organizational information security statement. The organization’s security policy comes from this statement. The security planning process must define how security will be managed, who will be responsible for setting up and monitoring compliance, how security measures will be tested for effectiveness, who is involved in establishing the security policy, and where the security policy is defined.

Security professionals must understand the risk management frameworks and must ensure that organizations adhere to the appropriate risk management frameworks. They must also understand the organizational governance components and how they work together to ensure governance.

Adherence to Risk Management Frameworks

Risk frameworks can serve as guidelines to any organization that is involved in the risk analysis and management process. Organizations should use these frameworks as guides but should also feel free to customize any plans and procedures they implement to fit their needs.

NIST

To comply with the federal standard, organizations first determine the security category of their information system in accordance with FIPS 199, Standards for Security Categorization of Federal Information and Information Systems, derive the information system impact level from the security category in accordance with FIPS Publication 200, and then apply the appropriately tailored set of baseline security controls in NIST SP 800-53.


The NIST risk management framework includes the following steps:

Step 1. Categorize information systems.

Step 2. Select security controls.

Step 3. Implement security controls.

Step 4. Assess security controls.

Step 5. Authorize information systems.

Step 6. Monitor security controls.

These steps are implemented in different NIST publications, including FIPS 199, SP 800-60, FIPS 200, SP 800-53, SP 800-160, SP 800-53A, SP 800-37, and SP 800-137.

Note

FIPS 199 and NIST SP 800-34 are covered earlier in this chapter.

Figure 3-6 shows the NIST risk management framework.


Figure 3-6 NIST Risk Management Framework
Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

SP 800-60 Vol. 1 Rev. 1

Security categorization is the key first step in the NIST risk management framework. FIPS 199 works with NIST SP 800-60 to identify information types, establish security impact levels for loss, and assign security categorization for the information types and for the information systems as detailed in the following process:

  1. Identify information types.

    • Identify information types based on 26 mission areas, including defense and national security, homeland security, disaster management, natural resources, energy, transportation, education, health, and law enforcement.

    • Identify management and support information based on 13 lines of business, including regulatory development, planning and budgeting, risk management and mitigation, and revenue collection.

  2. Select provisional impact levels using FIPS 199.

  3. Review provisional impact levels, and finalize impact levels.

  4. Assign system security category.

Let’s look at an example: Say that an information system used for acquisitions contains both sensitive, pre-solicitation-phase contract information and routine administrative information. The management within the contracting organization determines that:

  • For the sensitive contract information, the potential impact from a loss of confidentiality is moderate, the potential impact from a loss of integrity is moderate, and the potential impact from a loss of availability is low.

  • For the routine administrative information (non-privacy-related information), the potential impact from a loss of confidentiality is low, the potential impact from a loss of integrity is low, and the potential impact from a loss of availability is low.

The resulting security category (SC) for each of these information types is expressed as:

SC contract information = {(confidentiality, moderate), (integrity, moderate), (availability, low)}

SC administrative information = {(confidentiality, low), (integrity, low), (availability, low)}

The resulting security category of the information system is expressed as:

SC acquisition system = {(confidentiality, moderate), (integrity, moderate), (availability, low)}

This represents the high-water mark or maximum potential impact values for each security objective from the information types resident on the acquisition system.
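
The high-water mark can be computed mechanically: for each security objective, take the highest impact level found across all information types on the system. A minimal sketch using the two information types from the example:

# High-water-mark sketch for the acquisition system example.
IMPACT_ORDER = {"low": 0, "moderate": 1, "high": 2}

info_types = {
    "contract information": {"confidentiality": "moderate",
                             "integrity": "moderate",
                             "availability": "low"},
    "administrative information": {"confidentiality": "low",
                                   "integrity": "low",
                                   "availability": "low"},
}

# For each objective, keep the highest impact level found on the system.
system_sc = {
    objective: max((levels[objective] for levels in info_types.values()),
                   key=IMPACT_ORDER.get)
    for objective in ("confidentiality", "integrity", "availability")
}
print(system_sc)  # {'confidentiality': 'moderate', 'integrity': 'moderate', 'availability': 'low'}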

In some cases, the impact level for a system security category will be higher than any security objective impact level for any information type processed by the system.

The primary factors that most commonly raise the impact levels of the system security category above that of its constituent information types are aggregation and critical system functionality. Other factors that can affect the impact level include public information integrity, catastrophic loss of system availability, large interconnecting systems, critical infrastructures and key resources, privacy information, and trade secrets.

The end result of NIST SP 800-60 Vol. 1 Rev 1 is security categorization documentation for every information system. These categories can be used to complete the BIA, design the enterprise architecture, design the DRP, and select the appropriate security controls.

SP 800-53 Rev. 4

NIST SP 800-53 Revision 4 is a security controls development framework developed by NIST, which is part of the U.S. Department of Commerce.

SP 800-53 Rev. 4 divides the controls into three classes: technical, operational, and management. Each class contains control families or categories.

The following are the NIST SP 800-53 control families:

  • Access Control (AC)

  • Awareness and Training (AT)

  • Audit and Accountability (AU)

  • Security Assessment and Authorization (CA)

  • Configuration Management (CM)

  • Contingency Planning (CP)

  • Identification and Authentication (IA)

  • Incident Response (IR)

  • Maintenance (MA)

  • Media Protection (MP)

  • Physical and Environmental Protection (PE)

  • Planning (PL)

  • Program Management (PM)

  • Personnel Security (PS)

  • Risk Assessment (RA)

  • System and Services Acquisition (SA)

  • System and Communications Protection (SC)

  • System and Information Integrity (SI)

To assist organizations in making the appropriate selection of security controls for information systems, the concept of baseline controls has been introduced. Baseline controls are the starting point for the security control selection process described in SP 800-53 Rev. 4, and they are chosen based on the security category and associated impact level of information systems determined in accordance with FIPS 199 and FIPS 200, respectively. These publications recommend that the organization assign responsibility for common controls to appropriate organizational officials and coordinate the development, implementation, assessment, authorization, and monitoring of the controls.


The process in this NIST publication includes the following steps:

Step 1. Select security control baselines.

Step 2. Tailor baseline security controls.

Step 3. Document the control selection process.

Step 4. Apply the control selection process to new development and legacy systems.

Figure 3-7 shows the NIST security control selection process.


Figure 3-7 NIST Security Control Selection Process
Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

NIST 800-53 Revision 5 is currently being drafted.

SP 800-160

NIST SP 800-160 describes the systems security engineering framework. The framework defines, bounds, and focuses the systems security engineering activities, both technical and nontechnical, toward the achievement of stakeholder security objectives and presents a coherent, well-formed, evidence-based case that those objectives have been achieved. It is shown in Figure 3-8.


Figure 3-8 NIST Systems Security Engineering Framework
Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

The framework defines three contexts within which the systems security engineering activities are conducted: the problem context, the solution context, and the trustworthiness context.

The problem context defines the basis for a secure system, given the stakeholder’s mission, capability, performance needs, and concerns; the constraints imposed by stakeholder concerns related to cost, schedule, risk, and loss tolerance; and other constraints associated with life cycle concepts for the system. The solution context transforms the stakeholder security requirements into system design requirements; addresses all security architecture, design, and related aspects necessary to realize a system that satisfies those requirements; and produces sufficient evidence to demonstrate that those requirements have been satisfied. The trustworthiness context is a decision-making context that provides an evidence-based demonstration, through reasoning, that the system of interest is deemed trustworthy based upon a set of claims derived from security objectives.

NIST SP 800-160 uses the same system life cycle processes that are defined in ISO/IEC 15288:2015, as shown in Figure 3-9.


Figure 3-9 NIST System Life Cycle Processes and Stages
Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

A naming convention has been established for the system life cycle processes. Each process is identified by a two-character designation. Table 3-8 provides a list of the system life cycle processes and their associated two-character designators.

Table 3-8 NIST System Life Cycle Processes and Designators

ID    Process
AQ    Acquisition
AR    Architecture Definition
BA    Business or Mission Analysis
CM    Configuration Management
DE    Design Definition
DM    Decision Management
DS    Disposal
HR    Human Resource Management
IF    Infrastructure Management
IM    Information Management
IN    Integration
IP    Implementation
KM    Knowledge Management
LM    Life Cycle Model Management
MA    Maintenance
MS    Measurement
OP    Operation
PA    Project Assessment and Control
PL    Project Planning
PM    Portfolio Management
QA    Quality Assurance
QM    Quality Management
RM    Risk Management
SA    System Analysis
SN    Stakeholder Needs and Requirements Definition
SP    Supply
SR    System Requirements Definition
TR    Transition
VA    Validation
VE    Verification

Each of the processes listed in Table 3-8 has a unique purpose in the life cycle, and each process has tasks associated with it.

SP 800-37 Rev. 1

NIST SP 800-37 Rev. 1 defines the tasks that should be carried out in each step of the risk management framework, as follows:

Step 1. Categorize the information system.

  • Task 1-1: Categorize the information system and document the results of the security categorization in the security plan.

  • Task 1-2: Describe the information system (including the system boundary) and document the description in the security plan.

  • Task 1-3: Register the information system with the appropriate organizational program/management offices.

Step 2. Select the security controls.

  • Task 2-1: Identify the security controls that are provided by the organization as common controls for organizational information systems and document the controls in a security plan (or equivalent document).

  • Task 2-2: Select the security controls for the information system and document the controls in the security plan.

  • Task 2-3: Develop a strategy for the continuous monitoring of security control effectiveness and any proposed or actual changes to the information system and its environment of operation.

  • Task 2-4: Review and approve the security plan.

Step 3. Implement the security controls.

  • Task 3-1: Implement the security controls specified in the security plan.

  • Task 3-2: Document the security control implementation, as appropriate, in the security plan, providing a functional description of the control implementation (including planned inputs, expected behavior, and expected outputs).

Step 4. Assess the security controls.

  • Task 4-1: Develop, review, and approve a plan to assess the security controls.

  • Task 4-2: Assess the security controls in accordance with the assessment procedures defined in the security assessment plan.

  • Task 4-3: Prepare a security assessment report documenting the issues, findings, and recommendations from the security control assessment.

  • Task 4-4: Conduct initial remediation actions on security controls based on the findings and recommendations of the security assessment report and reassess remediated control(s), as appropriate.

Step 5. Authorize the information system.

  • Task 5-1: Prepare the plan of action and milestones based on the findings and recommendations of the security assessment report, excluding any remediation actions taken.

  • Task 5-2: Assemble the security authorization package and submit the package to the authorizing official for adjudication.

  • Task 5-3: Determine the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, other organizations, or the nation.

  • Task 5-4: Determine whether the risk to organizational operations, organizational assets, individuals, other organizations, or the nation is acceptable.

Step 6. Monitor the security controls.

  • Task 6-1: Determine the security impact of proposed or actual changes to the information system and its environment of operation.

  • Task 6-2: Assess the technical, management, and operational security controls employed within and inherited by the information system in accordance with the organization-defined monitoring strategy.

  • Task 6-3: Conduct remediation actions based on the results of ongoing monitoring activities, assessment of risk, and outstanding items in the plan of action and milestones.

  • Task 6-4: Update the security plan, security assessment report, and plan of action and milestones based on the results of the continuous monitoring process.

  • Task 6-5: Report the security status of the information system (including the effectiveness of security controls employed within and inherited by the system) to the authorizing official and other appropriate organizational officials on an ongoing basis in accordance with the monitoring strategy.

  • Task 6-6: Review the reported security status of the information system (including the effectiveness of security controls employed within and inherited by the system) on an ongoing basis in accordance with the monitoring strategy to determine whether the risk to organizational operations, organizational assets, individuals, other organizations, or the nation remains acceptable.

  • Task 6-7: Implement an information system disposal strategy, when needed, which executes required actions when a system is removed from service.

At the time of this writing, NIST SP 800-37 Revision 2 is in draft form.
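
The RMF does not mandate any particular format for assessment findings or the plan of action and milestones, but a small sketch can make Task 5-1 concrete. The following Python fragment is a hypothetical illustration only; the Finding and POAMItem structures and the build_poam helper are invented for this example and are not defined by NIST SP 800-37. It builds plan of action and milestones entries from assessment findings while excluding findings already corrected during initial remediation (Task 4-4).

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    control_id: str   # e.g., "AC-2", taken from the security assessment report
    description: str
    remediated: bool  # True if corrected during initial remediation (Task 4-4)

@dataclass
class POAMItem:
    control_id: str
    weakness: str
    milestone_date: date

def build_poam(findings, milestone_date):
    """Create plan of action and milestones entries for open findings only,
    mirroring Task 5-1 (findings already remediated are excluded)."""
    return [
        POAMItem(f.control_id, f.description, milestone_date)
        for f in findings
        if not f.remediated
    ]

findings = [
    Finding("AC-2", "Stale user accounts are not disabled", remediated=True),
    Finding("CM-6", "Baseline configuration is not documented", remediated=False),
]

for item in build_poam(findings, date(2025, 12, 31)):
    print(item.control_id, "-", item.weakness)
```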

SP 800-39

The purpose of NIST SP 800-39 is to provide guidance for an integrated, organizationwide program for managing information security risk to organizational operations (that is, mission, functions, image, and reputation), organizational assets, individuals, other organizations, and the nation resulting from the operation and use of federal information systems. NIST SP 800-39 defines three tiers in an organization.

  • Tier 1: Organization view: Addresses risk from an organizational perspective by establishing and implementing governance structures that are consistent with the strategic goals and objectives of the organization and the requirements defined by federal laws, directives, policies, regulations, standards, and missions/business functions.

  • Tier 2: Mission/business process view: Designs, develops, and implements the mission/business processes that support the missions/business functions defined at Tier 1.

  • Tier 3: Information systems view: Includes operational systems, systems under development, systems undergoing modification, and systems in some phase of the system development life cycle.

Figure 3-10 shows the risk management process applied across all three tiers identified in NIST SP 800-39.

Figure 3-10 NIST Risk Management Process Applied Across All Three Tiers
Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

The risk management process involves the following steps:

Step 1. Frame risk.

Step 2. Assess risk.

Step 3. Respond to risk.

Step 4. Monitor risk.

NIST Framework for Improving Critical Infrastructure Cybersecurity

The NIST Framework for Improving Critical Infrastructure Cybersecurity provides a risk-based approach to managing cybersecurity risk. The framework is organized around five core functions:

  • Identify (ID): Develop organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities.

  • Protect (PR): Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services.

  • Detect (DE): Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event.

  • Respond (RS): Develop and implement the appropriate activities to take action regarding a detected cybersecurity event.

  • Recover (RC): Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity event.

Each function is divided into categories: groups of cybersecurity outcomes closely tied to organizational needs and particular activities. Each category is then divided into subcategories that further define specific outcomes of technical and/or management activities. The function and category unique identifiers are shown in Figure 3-11.

Figure 3-11 NIST Cybersecurity Framework Function and Category Unique Identifiers
Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

Framework implementation tiers describe the degree to which an organization’s cybersecurity risk management practices exhibit the characteristics defined in the framework. The following four tiers are used:

  • Tier 1: Partial: Risk management practices are not formalized, and risk is managed in an ad hoc and sometimes reactive manner.

  • Tier 2: Risk Informed: Risk management practices are approved by management but may not be established as organizationwide policy.

  • Tier 3: Repeatable: The organization’s risk management practices are formally approved and expressed as policy.

  • Tier 4: Adaptive: The organization adapts its cybersecurity practices based on lessons learned and predictive indicators derived from previous and current cybersecurity activities through a process of continuous improvement.

Finally, a framework profile is the alignment of the functions, categories, and subcategories with the business requirements, risk tolerance, and resources of the organization. A profile enables organizations to establish a roadmap for reducing cybersecurity risk that is well aligned with organizational and sector goals, considers legal/regulatory requirements and industry best practices, and reflects risk management priorities.

The following steps illustrate how an organization could use the framework to create a new cybersecurity program or improve an existing program:

Step 1. Prioritize and scope.

Step 2. Orient.

Step 3. Create a current profile.

Step 4. Conduct a risk assessment.

Step 5. Create a target profile.

Step 6. Determine, analyze, and prioritize gaps.

Step 7. Implement the action plan.

An organization may repeat the steps as needed to continuously assess and improve its cybersecurity.
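
As a concrete illustration of Step 6, the following sketch compares a current profile with a target profile at the subcategory level and lists the gaps in priority order. It is a simplified, hypothetical example: the subcategory identifiers follow the CSF naming convention (for example, ID.AM-1), but treating implementation tiers as per-subcategory scores, the priority values, and the gap_analysis helper are assumptions made for illustration, not part of the framework itself.

```python
# Hypothetical CSF profile gap analysis. Each profile maps a subcategory
# identifier to an implementation tier (1 = Partial ... 4 = Adaptive), a
# simplification used here purely for illustration.
current_profile = {"ID.AM-1": 2, "PR.AC-1": 1, "DE.CM-1": 3}
target_profile = {"ID.AM-1": 3, "PR.AC-1": 3, "DE.CM-1": 3}
priority = {"PR.AC-1": 1, "ID.AM-1": 2, "DE.CM-1": 3}  # 1 = highest priority

def gap_analysis(current, target, priority):
    """Return subcategories whose current tier is below the target tier,
    ordered by business priority (Step 6: determine, analyze, prioritize)."""
    gaps = [
        (subcategory, target[subcategory] - current.get(subcategory, 1))
        for subcategory in target
        if current.get(subcategory, 1) < target[subcategory]
    ]
    return sorted(gaps, key=lambda gap: priority.get(gap[0], 99))

for subcategory, shortfall in gap_analysis(current_profile, target_profile, priority):
    print(f"{subcategory}: raise implementation by {shortfall} tier(s)")
```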

ISO/IEC 27005:2008

According to ISO/IEC 27005:2008, the risk management process consists of the following steps:

Step 1. Context establishment: Define the boundaries and scope of the risk management effort.

Step 2. Risk analysis (risk identification and estimation phases): Evaluate the risk level.

Step 3. Risk assessment (risk analysis and evaluation phases): Analyze the identified risks, taking into account the objectives of the organization.

Step 4. Risk treatment (risk treatment and risk acceptance phases): Determine how to handle the identified risks.

Step 5. Risk communication: Share information about risk between the decision makers and other stakeholders.

Step 6. Risk monitoring and review: Detect any new risks and maintain the risk management plan.

Figure 3-12 shows the risk management process based on ISO/IEC 27005:2008.

Figure 3-12 ISO/IEC 27005:2008 Risk Management Process

Open Source Security Testing Methodology Manual (OSSTMM)

The Institute for Security and Open Methodologies (ISECOM) publishes the OSSTMM, written by Pete Herzog. The manual covers security tests of physical, human (process), and communications systems, although it does not prescribe specific tools for performing these tests. It defines five risk categorizations: vulnerability, weakness, concern, exposure, and anomaly. After a risk is detected and verified, it is assigned a risk assessment value.

COSO’s Enterprise Risk Management (ERM) Integrated Framework

COSO broadly defines ERM as “the culture, capabilities and practices integrated with strategy-setting and its execution, that organizations rely on to manage risk in creating, preserving and realizing value.” The ERM framework is presented in the form of a three-dimensional matrix. The matrix includes four categories of objectives across the top: strategic, operations, reporting, and compliance. There are eight components of enterprise risk management. Finally, the organization, its divisions, and its business units are depicted as the third dimension of the matrix for applying the framework. The three-dimensional matrix of COSO’s ERM is shown in Figure 3-13.

Figure 3-13 COSO’s ERM Integrated Framework

Risk Management Standard by the Federation of European Risk Management Associations (FERMA)

FERMA’s Risk Management Standard provides guidelines for managing risk in an organization. Figure 3-14 shows FERMA’s risk management process as detailed in its Risk Management Standard.

Figure 3-14 FERMA’s Risk Management Process

Organizational Governance Components

Security professionals must understand how information security components work together to form a comprehensive security plan. Information security governance components include:

  • Policies

  • Processes

  • Procedures

  • Standards

  • Guidelines

  • Baselines

Policies

A security policy dictates the role of security as defined by senior management and is strategic in nature, meaning it describes the end results that security should achieve rather than how to achieve them. Policies are defined in two ways: by the level in the organization at which they are enforced and by the category to which they apply. Policies must be general in nature, meaning they are independent of any specific technology or security solution. Policies outline goals but do not give specific ways to accomplish them. Each policy must contain an exception area so that management can deal with situations that require exceptions.

Policies are broad and provide the foundation for the development of standards, baselines, guidelines, and procedures, all of which supply the security structure. Administrative, technical, and physical access controls then fill in the details to complete the security program.

The policy levels used in information security are organizational security policies, system-specific security policies, and issue-specific security policies. The policy categories used in information security are regulatory security policies, advisory security policies, and informative security policies. The policies are divided as shown in Figure 3-15.

Figure 3-15 Levels and Categories of Security Policies

Organizational Security Policy

An organizational security policy is the highest-level security policy adopted by an organization. Business goals steer the organizational security policy. An organizational security policy contains general directions and should have the following components:

  • Define overall goals of security policy.

  • Define overall steps and importance of security.

  • Define security framework to meet business goals.

  • State management approval of policy, including support of security goals and principles.

  • Define all relevant terms.

  • Define security roles and responsibilities.

  • Address all relevant laws and regulations.

  • Identify major functional areas.

  • Define compliance requirements and noncompliance consequences.

An organizational security policy must be supported by all stakeholders, should be highly visible to all personnel, and should be discussed regularly. In addition, it should be reviewed on a regular basis and revised based on the findings of that review. Each version of the policy should be maintained and documented with each new release.

System-Specific Security Policy

A system-specific security policy addresses security for a specific computer, network, technology, or application. This policy type is much more technically focused than an issue-specific security policy. It outlines how to protect the system or technology.

Issue-Specific Security Policy

An issue-specific security policy addresses specific security issues. Issue-specific policies include email privacy policies, virus checking policies, employee termination policies, no expectation of privacy policies, and so on. Issue-specific policies support the organizational security policy.

Policy Categories

Regulatory security policies address specific industry regulations, including mandatory standards. Examples of industries that must consider regulatory security policies include healthcare facilities, public utilities, and financial institutions.

Advisory security policies provide instruction on acceptable and unacceptable activities. In most cases, such a policy is considered to be strongly suggested, not compulsory. This type of policy usually gives examples of possible consequences if users engage in unacceptable activities.

Informative security policies provide information on certain topics and act as an educational tool.

Processes

A process is a series of actions or steps taken to achieve a particular end. Organizations define individual processes and their relationships to one another. For example, an organization may define a process for how customers enter online orders, how payments are processed, and how orders are fulfilled after the payments are processed. While each of these processes is separate and includes its own list of tasks that must be completed, they rely on each other for completion. A process lays out how a goal or task is accomplished. Processes then lead to procedures.

Procedures

Procedures embody all the detailed actions that personnel are required to follow and are the governance component closest to the computers and other devices. Procedures often include step-by-step instructions on how policies, processes, standards, and guidelines are implemented.

Standards

Standards describe how policies will be implemented within an organization. They are mandatory actions or rules that are tactical in nature, meaning they provide the steps necessary to achieve security. Just like policies, standards should be regularly reviewed and revised.

Guidelines

Guidelines are recommended actions that are much more flexible than standards, allowing for circumstances that standards cannot anticipate. Guidelines provide direction where standards do not apply.

Baselines

A baseline is a reference point that is defined and captured so that future states can be compared against it. Although capturing baselines is important, using them to assess the security state is just as important. Even the most comprehensive baselines are useless if they are never consulted.

Capturing a baseline at the appropriate point in time is also important. Baselines should be captured when a system is properly configured and fully updated. When updates occur, new baselines should be captured and compared to the previous baselines. At that time, adopting new baselines based on the most recent data might be necessary.
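
Baselines can be captured in many forms; one minimal sketch, assuming the baseline is a set of SHA-256 hashes of configuration files, is shown below. The capture_baseline and compare_baselines helpers and the file paths are hypothetical names chosen for this example.

```python
import hashlib
import json
from pathlib import Path

def capture_baseline(paths):
    """Record a SHA-256 hash of each configuration file as the baseline."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def compare_baselines(old, new):
    """Report files that were added, removed, or changed since the last baseline."""
    return {
        "added": [p for p in new if p not in old],
        "removed": [p for p in old if p not in new],
        "changed": [p for p in new if p in old and new[p] != old[p]],
    }

# Example paths only: capture the baseline when the system is properly
# configured and fully updated, then compare again after changes occur.
config_files = ["/etc/ssh/sshd_config", "/etc/passwd"]
Path("baseline.json").write_text(json.dumps(capture_baseline(config_files)))

previous = json.loads(Path("baseline.json").read_text())
print(compare_baselines(previous, capture_baseline(config_files)))
```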

Enterprise Resilience

The ISO defines enterprise resilience as “the ability of an organization to absorb and adapt in a changing environment to enable it to deliver its objectives and to survive and prosper.” Enterprise resilience encompasses the entire risk management effort of an organization.

The U.S. Computer Emergency Readiness Team (US-CERT), part of the Department of Homeland Security (DHS), created the Cyber Resilience Review (CRR) assessment to help organizations evaluate their operational resilience and cybersecurity practices, particularly the cybersecurity and service continuity practices of critical infrastructure owners and operators. The CRR may be conducted as a self-assessment or as an onsite assessment facilitated by DHS cybersecurity professionals.

The CRR consists of 10 domains. Each domain is composed of a purpose statement, a set of specific goals and associated practice questions unique to the domain, and a standard set of Maturity Indicator Level (MIL) questions. The 10 domains are as follows:

  • Asset management: To identify, document, and manage assets during their life cycle to ensure sustained productivity to support critical services

  • Controls management: To identify, analyze, and manage controls in a critical service’s operating environment

  • Configuration and change management: To establish processes to ensure the integrity of assets, using change control and change control audits

  • Vulnerability management: To identify, analyze, and manage vulnerabilities in a critical service’s operating environment

  • Incident management: To establish processes to identify and analyze events, detect incidents, and determine an organizational response

  • Service continuity management: To ensure the continuity of essential operations of services and their associated assets if a disruption occurs as a result of an incident, a disaster, or another event

  • Risk management: To identify, analyze, and mitigate risks to critical service assets that could adversely affect the operation and delivery of services

  • External dependencies management: To establish processes to manage an appropriate level of controls to ensure the sustainment and protection of services and assets that are dependent on the actions of external entities

  • Training and awareness: To develop skills and promote awareness for people with roles that support the critical service

  • Situational awareness: To actively discover and analyze information related to immediate operational stability and security and to coordinate such information across the enterprise to ensure that all organizational units are performing under a common operating picture

The CRR uses MILs to provide organizations with an approximation of the maturity of their practices in the 10 cybersecurity domains. It uses the following six MILs:

  • MIL0—Incomplete: Practices in the domain are not being performed as measured by responses to the relevant CRR questions in the domain.

  • MIL1—Performed: All practices that support the goals in a domain are being performed as measured by responses to the relevant CRR questions.

  • MIL2—Planned: A specific practice in the CRR domain is not only performed but also supported by planning, stakeholders, and relevant standards and guidelines.

  • MIL3—Managed: All practices in a domain are performed, planned, and have in place the basic governance infrastructure to support the process.

  • MIL4—Measured: All practices in a domain are performed, planned, managed, monitored, and controlled.

  • MIL5—Defined: All practices in a domain are performed, planned, managed, measured, and consistent across all constituencies within an organization that have a vested interest in the performance of the practice.

An organization can attain a given MIL only if it has attained all the lower MILs.
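
Because a given MIL can be attained only when every lower MIL is also attained, a domain's score is effectively the highest unbroken run of satisfied levels starting at MIL1. The following minimal sketch illustrates that scoring rule; the satisfied dictionary and the attained_mil helper are hypothetical constructs for this example, not part of the CRR.

```python
def attained_mil(satisfied):
    """Return the highest MIL attained in a domain. A level counts only if
    every lower level (starting at MIL1) is also satisfied; otherwise the
    result is 0 (MIL0 - Incomplete)."""
    level = 0
    for mil in range(1, 6):
        if satisfied.get(mil, False):
            level = mil
        else:
            break
    return level

# MIL1 through MIL3 are satisfied, MIL4 is not, MIL5 is satisfied: the
# attained MIL is 3, because MIL5 cannot be credited while MIL4 is unmet.
print(attained_mil({1: True, 2: True, 3: True, 4: False, 5: True}))  # prints 3
```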

The CRR assessment enables an organization to assess its capabilities relative to the NIST Cybersecurity Framework (CSF). The CRR, whether through the self-assessment tool or facilitated session, generates a report as a final product.

Exam Preparation Tasks

As mentioned in the section “How to Use This Book” in the Introduction, you have a couple of choices for exam preparation: the exercises here and the practice exams in the Pearson IT Certification test engine.

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 3-9 lists these key topics and the page number on which each is found.

Table 3-9 Key Topics for Chapter 3

Key Topic Element | Description | Page Number
Table 3-1 | Confidentiality, integrity, and availability potential impact definitions | 100
Paragraph | FIPS 199 explanation | 101
Table 3-2 | Administrative (management) controls | 105
Table 3-3 | Logical (technical) controls | 106
Table 3-4 | Physical controls | 107
List | Threat actors | 123
List | Threat actor evaluation criteria | 124
Paragraph | SLE calculation | 128
Paragraph | ALE calculation | 128
Paragraph | ARO explanation | 130
Paragraph | Payback explanation | 132
Paragraph | Net present value (NPV) explanation | 132
Section | Four basic risk strategies | 135
Paragraph | NIST SP 800-30 steps | 137
Paragraph | Threat agents | 139
List | NIST SP 800-34 Rev. 1 steps | 143
List | NIST risk management framework | 149
List | NIST SP 800-53 Rev. 4 Control Families | 152
List | NIST SP 800-53 Rev. 4 Steps | 153
List | NIST Framework for Improving Critical Infrastructure Cybersecurity core functions | 160
List | ISO/IEC 27005:2008 risk management process | 162

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

access control list (ACL)

administrative control

advisory security policy

annualized loss expectancy (ALE)

annualized rate of occurrence (ARO)

asset

asset value (AV)

availability

baseline

business continuity plan (BCP)

checksum

clandestine

compensative control

confidentiality

continuity of operations plan (COOP)

corrective control

countermeasure

covert

crisis communications plan

critical infrastructure protection (CIP) plan

cyber incident response plan

detective control

deterrent control

digital signature

directive control

disaster recovery plan (DRP)

encryption

exposure factor (EF)

external actor

guideline

hacktivist

hash

hot site

information system contingency plan (ISCP)

informative security policy

inherent risk

integrity

internal actor

issue-specific security policy

likelihood

load balancing

logical control

magnitude

management control

mean time to repair (MTTR)

maximum tolerable downtime (MTD)

mean time between failures (MTBF)

motivation

occupant emergency plan

organizational security policy

overt

physical control

policy

preventive control

procedure

qualitative risk analysis

quantitative risk analysis

recovery control

recovery point objective (RPO)

recovery time objective (RTO)

redundant array of independent disks (RAID)

regulatory security policy

residual risk

risk

risk acceptance

risk avoidance

risk management

risk mitigation

risk transference

security requirements traceability matrix (SRTM)

single loss expectancy (SLE)

standard

steganography

system-specific security policy

technical control

threat

threat agent

vulnerability

work recovery time (WRT)

Review Questions

1. You are analyzing a group of threat agents that includes hardware and software failure, malicious code, and new technologies. Which type of threat agents are you analyzing?

  • human

  • natural

  • environmental

  • technical

2. You have been asked to document the different threats to an internal file server. As part of that documentation, you need to include the monetary impact of each threat occurrence. What should you do?

  • Determine the ARO for each threat occurrence.

  • Determine the ALE for each threat occurrence.

  • Determine the EF for each threat occurrence.

  • Determine the SLE for each threat occurrence.

3. After analyzing the risks to your company’s web server, company management decides to implement different safeguards for each risk. For several risks, management chooses to avoid the risk. What do you need to do for these risks?

  • Determine how much risk is left over after safeguards have been implemented.

  • Terminate the activity that causes the risks or choose an alternative that is not as risky.

  • Pass the risk to a third party.

  • Define the acceptable risk level the organization can tolerate and reduce the risks to that level.

4. You are currently engaged in IT security governance for your organization. You specifically provide instruction on acceptable and unacceptable activities for all personnel. What should you do?

  • Create an advisory security policy that addresses all these issues.

  • Create an NDA that addresses all these issues.

  • Create an informative security policy that addresses all these issues.

  • Create a regulatory security policy and system-specific security policy that address all these issues.

5. A security analyst is using the SC information system = [(confidentiality, impact), (integrity, impact), (availability, impact)] formula while performing risk analysis. What will this formula be used for?

  • to calculate quantitative risk

  • to calculate ALE

  • to calculate the aggregate CIA score

  • to calculate SLE

6. Your organization has experienced several security issues in the past year, and management has adopted a plan to periodically assess its information security awareness. You have been asked to lead this program. Which program are you leading?

  • security training

  • continuous monitoring

  • risk mitigation

  • threat identification

7. The chief information security officer (CISO) has asked you to prepare a report for management that includes the overall costs associated with running the organizational risk management process, including insurance premiums, finance costs, administrative costs, and any losses incurred. What are you providing?

  • ROI

  • SLE

  • TCO

  • NPV

8. While performing risk analysis, your team has come up with a list of many risks. Several of the risks are unavoidable, even though you plan to implement some security controls to protect against them. Which type of risk is considered unavoidable?

  • inherent risks

  • residual risks

  • technical risks

  • operational risks

9. A hacker gains access to your organization’s network. During this attack, he is able to change some data and access some design plans that are protected by a U.S. patent. Which security tenets have been violated?

  • confidentiality and availability

  • confidentiality and integrity

  • integrity and availability

  • confidentiality, integrity, and availability

10. An organization has a research server farm with a value of $12,000. The exposure factor for a complete power failure is 10%. The annualized rate of occurrence that this will occur is 5%. What is the ALE for this event?

  • $1,200

  • $12,000

  • $60

  • $600
