Chapter 5
Domain 3: Risk Identification, Monitoring, and Analysis

IT risk management is central to IT security in an organization's operations. All organizations depend on IT and the information systems built on that technology to successfully carry out the mission and business functions of the enterprise. Increasingly, portable devices such as personal digital assistants, tablets, laptops, and cell phones are being integrated into the IT systems of the organization.

Threats challenge the organization from many different directions, requiring continuous and ongoing threat identification and monitoring processes to determine which controls to apply. Organizational IT systems are subject to serious threats and attacks that could have an adverse impact on the availability and integrity of the organization's hardware systems, information storage, and telecommunications.

Managing security risks to information systems is a complex, multifaceted undertaking. There are levels of responsibility throughout the organization that are necessary to meet the goals and objectives of risk management. Senior leaders and management provide corporate policies and governance, organizational managers provide planning and implementation to meet the requirements of these policies, and individuals such as security practitioners are charged with ensuring that the controls designed to reduce risk meet operational baselines efficiently and effectively. The Systems Security Certified Practitioner must always be familiar with the enterprise's policies, as well as the standards, procedures, and guidelines that ensure due-care protection of its information and hardware assets. Through the use of risk analysis, assets and their vulnerabilities are identified, as well as potential threats that may exploit those vulnerabilities. A variety of tools and reporting techniques are used to reduce the impact of threats to the organization.

The Systems Security Certified Practitioner will be involved at many levels of risk identification, threat assessment, intrusion discovery, and eventual remediation and restoration activities. It is important to have a thorough understanding of the risks, vulnerabilities, and threats that face an organization's IT infrastructure. These are processes and projects to which Systems Security Certified Practitioners will contribute both knowledge and skills.

Understanding the Risk Management Process

Risks may take many forms in an organization, including IT risk, financial risk, operational risk, commercial market risk, operational security risk, and personal exposure to risk. A senior management team of any organization is continuously burdened with the identification and mitigation of risks in any category that may result in damage or harm to assets of the organization. The most common risk management approach used by organizations is to identify, assess, manage, and control potential events or situations. Various processes are used to effectively perform risk management. Information risk management is the process of acknowledging and identifying risks, mitigating risks by the reduction of threats or vulnerabilities through the use of controls, and implementing strategies to maintain an acceptable risk level.

Risks are categorized depending upon the type of risk or the aspect of the business they affect. The top executives must consider every risk to an organization.

Defining Risk

Risk is defined as the probability or likelihood that a certain event or incident will occur and have an adverse impact on an organization and the achievement of its mission or objectives. Since risk has been defined as a probability of something occurring, it may be represented by a mathematical function.

Risk = Probability (that an event will occur) × Impact (of the harm to the organization)

When information technology risk is defined, we are basically speaking of the unauthorized use, disruption, modification, exfiltration, copying, inspection, or destruction of information or information processing hardware or software assets. Company managers will use various tools such as risk analysis and risk assessments to analyze the potential risks that may impact organizational assets. There is no such thing as a risk-free environment, and there is no way to totally reduce risk to zero. The tools, devices, and techniques IT managers and security practitioners use to reduce the level of risk are referred to as controls. Residual risk is any risk remaining after the implementation of safeguards and controls. For instance, an asset valued at $100,000 may have a $100,000 insurance policy placed upon it. But that $100,000 insurance policy may have a $5,000 deductible, which is the amount an organization pays in the event of loss. Obviously, with a total loss, the insurance company would reimburse the organization $95,000. The residual risk would be the $5,000 deductible.
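The residual-risk arithmetic in the insurance example can be sketched in a few lines of code. This is an illustrative calculation only; the function name is hypothetical, and the figures simply restate the example above.

```python
# Illustrative sketch of the residual-risk example above: a $100,000 asset
# insured with a $5,000 deductible. In a total loss, the insurer reimburses
# the asset value minus the deductible; the deductible is the residual risk.

def residual_risk(asset_value: float, deductible: float) -> tuple[float, float]:
    """Return (reimbursement, residual) for a total loss under a policy
    with the given deductible."""
    reimbursement = asset_value - deductible
    residual = deductible
    return reimbursement, residual

reimbursement, residual = residual_risk(100_000, 5_000)
print(reimbursement, residual)  # 95000 5000
```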

Risk Management Process

The risk management process is simply a business function that involves identifying, evaluating, and controlling risks (Figure 5.1). As you have seen, risks are prevalent throughout the organization. For example, a commercial market risk may occur when a competitor lowers its prices below what it costs another company to manufacture the same product. A major supplier may have a fire, thus eliminating a supply of raw material to the organization. A hurricane might cause the manufacturing facility to be closed down for an extended period of time. Each of these represents a risk to the organization and must be addressed through the risk management process. For instance, in the example of the major supplier experiencing a fire, management would have recognized that nondelivery of production materials would pose a threat to the organization. During a risk analysis process, threats that might affect a major supplier would be identified. In the event such threats came to fruition, causing the supplier to be unable to supply raw materials, plans would be in place to purchase the raw materials through other suppliers. Although this scenario is simplistic in nature, it illustrates the process behind risk management.

[Figure: flowchart of the risk management process with three steps, labeled left to right: Identify Asset, Determine Threat, and Formulate a Response]

Figure 5.1 The process of risk management

The loss of information processing hardware, software, and information is a serious risk to any organization. Information risk management (IRM) is the process whereby risks to IT hardware, software, and information assets are identified, threats and vulnerabilities are reduced to an acceptable level, and controls are implemented to maintain that level. At the heart of information risk management are decisions concerning priorities and monetary resources. The question for senior management is what assets to protect and how much to budget to protect them.

Several categories of risks to IT hardware, software, and information assets must be considered. The following are some of the categories that represent threat sources to an IT organization:

  1. Weather Hurricanes, tornadoes, snowstorms, avalanches, floods, tsunamis, dry spells, and freezing temperatures
  2. Utility and Services Disruptions of utilities and services such as water, electrical power, natural gas, telephone system, microwave communication, cellular communication
  3. Human Activities Mistakes, neglect, sabotage, theft, vandalism, hacking, mischief
  4. Business Process Equipment malfunction, supply-chain interruption, quality control problem, market pricing
  5. Information Technology Equipment misuse, loss of information, damage or outage of IT equipment
  6. Reputation Theft of customer information, release of trade secrets and proprietary processes, release of sensitive and confidential corporate information

The determination of risk requires the knowledge of various attributes that exist in every scenario where there is risk, as detailed in the following sections.

Assets

Assets are resources that an organization uses to fulfill its mission or business objectives. Resource in this context may be very broad and refer to any asset, including time, people, money, and physical items. When defining risk, we are considering the possibility of loss of a resource. Assets and resources always represent value to the organization. Resources may be grouped into two major categories: tangible assets and intangible assets.

  1. Tangible Assets A tangible asset represents a physical item that may be touched. The following assets are considered tangible assets:
    • Buildings and structures
    • Tools and equipment
    • Raw goods and finished goods
    • Furniture and business equipment
    • IT hardware and software
    • Environmental systems
    • Documentation
  2. Intangible Assets An intangible asset represents an item that is not physical in nature. The following assets are intangible assets:
    • Trade secrets, methodology, and intellectual property
    • Proprietary or sensitive information
    • Customer or supplier databases
    • Contracts and agreements
    • Policies, plans, and records
    • Information stored by any means
    • People

Ranking Assets

Assets may be ranked or prioritized within an organization. During asset analysis, a variety of scores will be utilized to rank assets. Of course, the people within an organization are ranked as the highest-value asset and must be protected at all costs. The second highest ranking belongs to critical assets; if critical assets are lost or compromised, the viability of the organization is jeopardized. While hardware items, facilities, and products available from third-party vendors are ranked lower, critical assets in any organization generally include information assets such as customer/vendor data, trade secrets, proprietary information, intellectual property, and information generated by or critical to the ongoing operation of the organization.

Threats

Threats are defined as any incident or event that has the potential to harm an organization. Harm to an organization's IT infrastructure may take the form of unauthorized access, data manipulation or change, denial of access, destruction of assets, and unauthorized information access and release. Threats to an organization are caused by a number of different threat actions. Common sources of threats are natural, human, or environmental. While it is easy to assume that a hurricane is a threat, it is in fact just a threat source. What this means, in essence, is that the hurricane is the actor that causes the threats. The actual threat caused by the hurricane is referred to as the threat action or threat agent. A threat vector is the path that a threat takes to cause an action.

While threat sources were identified in the section “Risk Management Process” earlier in this chapter, the actual threats they pose may be individually identified. For example, a hurricane is a threat source, meaning that a variety of threats to the organization may be posed during a hurricane. Each of these threats will follow a specific threat vector to harm the organization:

  • The roof collapsing or windows and doors blowing out due to excessive wind.
  • The facility flooding because of high water or the penetration of rain, floods, or tide water.
  • The disruption of power or communications due to excessive wind.
  • The destruction of physical assets such as computer hardware, network cabling, and infrastructure due to fire caused by the hurricane.
  • The disruption of people. For example, people may not be available, access may be denied, the environment might be too dangerous, or communication access to information assets may be impossible regardless of where the people are located.

So while we may look at a number of different threat actions as being potential threats, in fact many of the threats are the same; they are just caused by different threat actions. For example, a fire, a flood, a wind storm, or a catastrophic mud slide may all cause the destruction of physical assets, such as computer hardware, networking, and infrastructure. As a security professional, you must plan for the resultant action of a threat regardless of the actual cause. The reality is that if the power goes out, the appropriate response must be determined regardless of why the power went out. An event triage is a process during which damage is assessed and restoration priorities are determined. During an event triage, you may predict that the power will be out for two hours, for example, or for four days; your prediction will influence your decisions for the proper recovery response. In any event, the power is out regardless of whether it was caused by a hurricane or forest fire.

The actions of humans pose the greatest risk to any organization. While hurricanes, tornadoes, and forest fires may be spectacular during the event and create great harm to the organization, the frequency and complexity of threats created by people far outweigh any other threat to the organization. Threats posed by the human element may be grouped into two categories:

  1. Insider Threats Insiders are individuals with internal access to the organization's information assets or networks. They may be employees, contractors, temporary workers, or persons with specific access rights, such as customers, clients, or suppliers. Insiders pose the largest single threat to an organization because insiders are supposedly trusted individuals. Insider threats may take the form of sabotage, fraud, theft of assets, disruption, or espionage.
  2. External Threats External threats may be posed by people who are external to the organization. Because these individuals are external, they must work harder at penetrating the organization's security controls and defenses; they do not have access to passwords or proprietary information as would an employee or contractor with access to the network. These individuals may be hackers, hacktivists, script kiddies, disgruntled former employees, industrial spies, or international terrorists.

Vulnerability

Vulnerability is any flaw or weakness that may be attacked or exploited by a threat. A vulnerability may be characterized by either an intentional flaw or weakness placed by an individual or an unintentional flaw such as a mistake in manufacturing. For example, a programmer might intentionally create a back door in a software application. If not detected during a code review, the back door could provide access to the programmer or other people with malicious intent.

Any information system, application, controls, or information assets could be exploited by a threat or a threat agent. For instance, an unpatched or unsupported application or operating system on a host machine or server could create a security vulnerability. From a corporate risk perspective, the security of the entire IT security environment must take into consideration the physical aspects of the building and structure, HVAC, fire suppression, physical lockdown, and access restrictions as well as vulnerabilities to the electrical power, communications, and network infrastructure.

Vulnerabilities may be grouped into two different categories:

  1. Intentional Vulnerability An intentional vulnerability is a willful action that places an asset in jeopardy. An example of an intentional vulnerability is creating access where none existed before or purposefully instigating a situation where assets are exposed to a threat. Intentionally placing malware on a system that creates a back door, allowing an outside visitor access to sensitive hardware, and escalating a user's rights so they can access sensitive information are all examples of creating an intentional vulnerability.
  2. Unintentional Vulnerability An unintentional vulnerability takes the form of a manufacturing defect, programming error, or design flaw in either hardware or software. An example of an unintentional vulnerability is the previously unknown flaw exploited by a “zero-day attack.” Unintentional vulnerabilities may also be caused through negligence, a lack of patching and maintenance, or a lack of effective policies and procedures. Accidents happen; with appropriate planning, training, and other mitigation techniques, the possibility of accidents can be reduced, but they can still happen. Insurance underwriters offer uninsured motorist policies to mitigate the effects when an accident occurs and the other driver lacks insurance: your policy covers your loss even though the other driver's lack of insurance may be intentional in nature. Your vulnerability to having an unintentional accident with an uninsured driver is the reason for such a policy.

Controls

Controls are the mechanisms utilized during the risk management process to reduce the ability of a threat to exploit a vulnerability, which would result in harm to the organization. Controls may also be used to reduce the level of a vulnerability. For example, a control that might be used to keep a thief (threat) from exploiting the door (vulnerability) on a backyard work shed might be a heavy lock (control) on the shed door. This effort would reduce or mitigate the vulnerability of the door of the shed. The methods used to reduce or mitigate a threat, on the other hand, might include a heavy lock on the backyard gate, motion-sensing lighting in the backyard, and a guard dog. Therefore, by reducing a vulnerability and reducing the threat, the result is reduction of the overall risk.

There are three categories of controls:

  1. Administrative Controls Administrative controls are put in place to enforce a policy.
  2. Technical Controls Technical controls, also referred to as logical controls, are controls placed on the network, computer system, or communication network. Technical controls take the form of hardware, software, or firmware.
  3. Physical Controls Physical controls are controls used to protect physical assets, such as fences, doors, locks, signs, lights, and other physical safeguards.

Safeguards

The terms safeguards and controls are used interchangeably and describe any device, procedure, or action that provides a degree of protection to an asset. Controls are a general category of procedures, mechanisms, and techniques that make up the layered defense model; safeguards are generally described in terms of preventative activities or devices put in place as a result of a risk analysis. A safeguard is generally the use of a control mechanism of some type; the concept of defense in depth provides for the use of multiple controls in a series. Second- or third-tier controls are sometimes referred to as safeguards because they may mitigate a threat that manages to pass through the primary control mechanism.

Compensating Controls

Every control can have a built-in or inherent weakness. A compensating control is a device, procedure, or mechanism that addresses the inherent weakness of the primary control. In most cases, a compensating control addresses conditions or situations that the primary control misses.

Countermeasures

During a risk analysis, various threats as well as vulnerabilities are identified. Countermeasures describe specific activities, procedures, or devices that are put in place to mitigate a risk or vulnerability identified during the risk analysis process. Examples of countermeasures include:

  • A specific information systems security policy
  • Specific topics covered during security education, awareness, and training
  • Logical access controls for privacy requirements
  • Proxy servers, HIPS, NIPS, air gap techniques
  • Targeted firewall IP exclusion rules
  • Honeypots

Exposure

Exposure is defined as the estimated percentage of loss should a specific threat exploit the vulnerability of an asset. For example, if a server is valued at $10,000 and each time it is attacked $5,000 is required to rebuild the system, then the exposure factor is 50 percent. Exposure is always expressed as a percentage that when multiplied by the asset value results in the amount of loss during each attack. For instance, the example scenario above restated as a mathematical equation would appear as: $10,000 × .50 = $5,000
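The exposure-factor arithmetic above is simple enough to express directly in code. This sketch just restates the server example ($10,000 asset, 50 percent exposure factor); the function name is illustrative.

```python
# Exposure factor: the percentage of an asset's value lost each time a
# specific threat exploits the asset's vulnerability.

def loss_per_event(asset_value: float, exposure_factor: float) -> float:
    """Dollar loss per attack = asset value x exposure factor (0.0-1.0)."""
    return asset_value * exposure_factor

print(loss_per_event(10_000, 0.50))  # 5000.0
```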

Risk Analysis

The risk analysis process is an analytical method of identifying both threats and asset vulnerabilities and determining the likelihood and impact should the threat event occur and exploit the identified vulnerability.

Risk Impact

Understanding risk impact is an important factor in risk analysis and risk assessment programs. The amount of impact or damage a threat may cause to an asset may determine the risk level to be dealt with. It may be determined that although a risk exists, the value or importance of the asset is such that it may not be worth protecting or that the cost to protect it outweighs the value.

Risk Management Frameworks and Guidance for Managing Risks

Risk management requires knowledge of best practices, industry standards, and a structured risk analysis method whereby assets and threats may be classified to determine the efforts required to reduce the risk to the organization. Through the years, a structured series of standards and methodologies has evolved in the form of frameworks.

Frameworks have originated through interaction with industry groups, consultants, and a variety of working committees managed by standards organizations. Three frameworks have become widely utilized throughout IT security and are discussed in the following sections.

ISO/IEC 27005

The ISO/IEC 27000 series is a group of standards that offers guidance to IT security management organizations. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have published an extensive series of best practices, recommendations, and guidelines. The ISO/IEC refers to its family of standards as an information security management system (ISMS). The most popular of the standards in the series is ISO/IEC 27002, which is a security code of practice and guidelines for IT security management. ISO/IEC 27005 offers a framework based upon a broad scope of various factors within the organization. This type of framework allows each organization to address the risks based upon their own ISMS. This framework covers such areas as the following:

  • Information security risk assessment
  • Information security risk treatment
  • Information security risk acceptance
  • Information security risk communication
  • Information security risk monitoring and review

NIST Special Publication 800-37 Revision 1

While the ISO/IEC 27000 series has its foundations in establishing a structure of practices within an organization and then certifying and accrediting the organization for meeting established benchmarks, NIST Special Publication 800-37 Revision 1, “Guide for Applying the Risk Management Framework to Federal Information Systems,” was created to guide IT organizations within the U.S. federal government with a more practical approach to risk management. The National Institute of Standards and Technology (NIST) originated as the National Bureau of Standards and is a nonregulatory agency of the U.S. Department of Commerce. NIST is responsible for developing information security standards and guidelines, including minimum requirements for federal information systems. Although these bulletins and publications originated as guidelines for federal agencies, they are widely applied throughout private and public businesses and organizations.

There are a variety of terms associated with NIST publications:

  1. FISMA The National Institute of Standards and Technology was given statutory responsibilities under the Federal Information Security Management Act (FISMA). Under the law, NIST is responsible for developing information security standards and guidelines. The determination as to which standards to mandate upon federal information systems under the law has been given to the secretary of commerce.
  2. FIPS Federal Information Processing Standards (FIPS) are standards approved by the secretary of commerce as compulsory and binding standards for federal agencies.
  3. Special Publications Special Publications (SP) are documents issued by NIST with recommendations and guidance for federal agencies. Only select Special Publications are mandated for compliance by federal agencies.

The risk management framework (RMF) that is detailed in NIST SP 800-37 Revision 1 offers a six-step process for implementing information security and risk management activities into a cohesive system development life cycle. The risk management methodology described by NIST and this Special Publication is composed of the following steps:

  1. Step 1: Categorize The information system is examined to determine a category for the system. During this process, the information that the system processes, stores, and transmits is evaluated. This evaluation is used to determine the asset value and potential risks to the system.
  2. Step 2: Select Baseline security controls are selected based on the category of the system. For example, a system that processes classified data must include a selection of controls that mitigate risks for federal information systems in that category. The baseline selection may then be augmented with additional controls, or existing controls may be modified or customized based on the needs of the organization.
  3. Step 3: Implement In this step the selected security controls are installed and properly initiated throughout the system. The controls must be documented to show the implementation within the system and what category of risks they mitigate.
  4. Step 4: Assess An assessment process is utilized to determine if the controls are installed and set up correctly, operating effectively, and meeting the risk mitigation requirements as established by the risk management plan for the system.
  5. Step 5: Authorize Authorization occurs when an acceptable level of risk is achieved based upon the implementation of controls.
  6. Step 6: Monitor This step involves the ongoing assessment of the baseline operation of a control and its risk mitigation effectiveness. This process also includes a change management process that documents not only changes to the control and the resulting impact analysis but also environmental changes throughout the system. Management reports are also produced as a result of information system monitoring.

Simply put, a risk management framework is a continuous methodology of categorizing the system based upon a number of criteria and then implementing and monitoring various risk mitigation controls (Figure 5.2). This is referred to as a system security life cycle approach.


Figure 5.2 NIST SP 800-37 Revision 1 risk management framework
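The continuous, cyclical nature of the framework can be sketched as an ordered sequence of steps. This is purely illustrative: the step names come from SP 800-37 Revision 1, but the data structure and function below are a hypothetical paraphrase, not anything NIST defines.

```python
# Illustrative model of the six RMF steps as a cycle: after Monitor, the
# process returns to Categorize, reflecting the system security life cycle.
RMF_STEPS = ["Categorize", "Select", "Implement", "Assess", "Authorize", "Monitor"]

def next_step(current: str) -> str:
    """Return the RMF step that follows `current`, wrapping from
    Monitor back to Categorize."""
    i = RMF_STEPS.index(current)
    return RMF_STEPS[(i + 1) % len(RMF_STEPS)]

print(next_step("Authorize"))  # Monitor
print(next_step("Monitor"))    # Categorize
```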

The categorization of the system can be quite extensive, requiring the determination of the system architecture.

  1. System Architecture This may include the business processes, mission, architectural type or model, and boundaries between the system and other related systems.
  2. Input Constraints This includes all of the items that must be considered, such as laws, policy, goals and objectives, availability, costs, and other input.

NIST Special Publication 800-39

NIST Special Publication 800-39, “Managing Information Security Risk: Organization, Mission, and Information System View,” is a NIST document that concerns security risk in an IT environment. It was authored by the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology, which provides technical leadership to the nation's measurement and standards infrastructure. ITL should not be confused with ITIL, the Information Technology Infrastructure Library, a series of best-practice publications for the IT services industry. ITL provides research, guidelines, and outreach efforts in information systems security for industry, government, and academic organizations.

NIST Special Publication 800-39 contains these major topics:

  • Components of Risk Management
  • Multi-tiered Risk Management
  • Tier 1 – Organizational View
  • Tier 2 – Mission/Business Process View
  • Tier 3 – Information Systems
  • Trust and Trustworthiness
  • Organizational Culture
  • Relationship among Key Risk Concepts

This publication offers a very good starting point to understanding risk and IT security.

Risk Analysis and Risk Assessment

Risk analysis and risk assessment define various methods for understanding risk within an organization. To be effective, risk management should be a culture and process fully integrated throughout the organization, from determining asset value and deciding upon risk mitigation controls to monitoring and assessing the value of those controls in the everyday environment.

There are two processes that an organization uses when considering risk. A risk analysis is an analytical approach usually employing facts and costs, while a risk assessment may provide a more detailed approach utilizing a much broader scope of information, such as impact analysis and information provided by subject matter experts.

Risk Analysis

Risk analysis is the method by which risk is identified and analyzed. It is performed by identifying potential threats and vulnerabilities to arrive at a risk determination. Numerical analytical techniques are employed to determine asset value and thereby arrive at the cost of the controls required to reduce or mitigate the overall risk associated with the asset. There are two general methodologies used to analyze risk: quantitative and qualitative.

Quantitative Risk Analysis

The quantitative risk analysis process involves accumulating various facts and figures about an asset. These facts and figures may include information such as original cost, total cost of ownership, replacement cost, or other monetary amounts. The organization then utilizes this information to determine the total cost that should be budgeted for controls used to reduce risk to an acceptable level. Quantitative risk analysis is also referred to as formal risk modeling. Here are some examples of establishing asset value:

  1. Example 1 An e-commerce server produces $10,000 in revenue per hour. It requires four hours to rebuild after a 100 percent loss.
  2. Example 2 A customer database is valued at $200,000. In the event of an attack, it will require six hours to restore the database from a backup location.
  3. Example 3 A communications closet contains two servers, three routers, and two switches with a combined value of $16,000. In the event of a localized fire, all items may be lost.

The example figures cited here may have been determined based upon the cost to replace the items, the salaries of technical individuals, and possibly an estimated cost of lost revenue due to downtime, among other estimated costs.

The following costs or activities should be considered when values are assigned to assets:

  • Cost to replace the asset if lost
  • Acquisition cost, including freight and purchasing costs
  • The effect or cost on other assets
  • Labor costs to replace the asset
  • Indirect labor costs, such as salaries, contractors
  • Cost of development
  • Liability costs such as the potential damage to other devices
  • Liability costs such as damage to reputation
  • Total value of the asset to the organization that would be lost if the asset was attacked
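One way to picture the valuation step is to total the applicable cost categories from the list above for a given asset. The sketch below is hypothetical: the category names and dollar figures are illustrative, not drawn from the chapter's examples.

```python
# Hypothetical asset valuation: total the cost categories attributed
# to an asset (replacement, labor, development, liability, etc.).

def asset_value(costs: dict[str, float]) -> float:
    """Total value of an asset as the sum of its attributed costs."""
    return sum(costs.values())

# Illustrative figures for a single server.
server_costs = {
    "replacement": 8_000.0,           # cost to replace the asset if lost
    "acquisition_and_freight": 500.0,
    "replacement_labor": 1_200.0,
    "estimated_lost_revenue": 6_300.0,
}
print(asset_value(server_costs))  # 16000.0
```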

Quantitative Risk Analysis Formulas and Impact Analysis

Asset valuation, determined by various analytical methodologies, can be used to determine the impact of the loss of an asset to the organization. Several variables can be used to determine the impact of a loss as a cost to the organization. As might be imagined, not every attack results in a 100 percent loss, and not every attack is successful. Therefore, a quantitative risk analysis might consider, for example, that an attack destroys only 60 percent of an asset's value or that a successful attack happens only twice a year.

The ultimate question we are seeking an answer to is how much we should spend on a countermeasure. The following are the calculations that will assist us in answering this question:

  1. Single Loss Expectancy (SLE)

    SLE is the cost (in dollars) that can be lost if a risk event happens. A single loss is, of course, a one-time loss. But, as mentioned, not every event results in a total loss, so a factor must be used in the equation to represent the amount of loss we expect. So, if

    AV = asset value (in dollars)
    EF = exposure factor (the percentage of the asset expected to be lost)

    the equation would appear as follows:

    SLE = AV × EF

    Or

    Single Loss Expectancy = Asset Value × Exposure Factor

    In this equation, if an asset was worth $10,000 and we expect it to lose only half of its value with any given risk event, the equation would be as follows:

    SLE = $10,000 × 0.5 = $5,000
  2. Annualized Loss Expectancy (ALE)

    The ALE is the total cost (in dollars) for all of the SLEs occurring during the year. The number of times we anticipate the risk event will happen during the year will be represented by annualized rate of occurrence (ARO). For instance, if an event happened once a year, ARO = 1; if an event happened twice a year, ARO = 2. So, the equation would appear as follows:

    ALE = SLE × ARO

    In this equation, if the SLE is $5,000 and we expect two risk events to happen in a year, the equation would be as follows:

    ALE = $5,000 × 2

    or

    ALE = $10,000
  3. Annualized Rate of Occurrence

    The annualized rate of occurrence, or ARO, expresses the number of times a risk event is expected to occur in a year. An ARO of 1.0 represents one annual occurrence. If we expect two annual occurrences, the ARO equals 2.0, and if we expect an event to happen every two years, the ARO equals 0.5.

    Normally, the ARO is derived based upon historical or empirical knowledge. For instance, the frequency of hurricanes or floods in a geographic area is recorded in almanacs and by government weather services. On occasion, the ARO may be derived based only on past personal experience. For instance, if a hacker was able to bypass the firewall four times during the last year, this would result in an ARO of 4.0.

  4. Exposure Factor (EF) An exposure factor (EF) refers to the harm or amount of loss that might be experienced by an asset during a risk event. The EF is always written as a percentage or probability and may be expressed in either of the following manners:
    EF = 100 percent
    EF = 1.0

    This would represent catastrophic or complete 100 percent loss of the asset.

    EF = 50 percent
    EF = 0.5

    This EF would represent a 50 percent loss of the asset.
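Taken together, the formulas above can be sketched as a few Python helpers; the dollar figures reuse the chapter's examples.

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF: the dollar loss expected from a single risk event."""
    return asset_value * exposure_factor

def annualized_rate_of_occurrence(events: float, years: float) -> float:
    """ARO: expected events per year, derived from historical counts."""
    return events / years

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO: the total expected dollar loss per year."""
    return sle * aro

# Chapter example: a $10,000 asset losing half its value per event (EF = 0.5),
# with two risk events expected per year (ARO = 2.0).
sle = single_loss_expectancy(10_000, 0.5)   # 5000.0
aro = annualized_rate_of_occurrence(2, 1)   # 2.0
ale = annualized_loss_expectancy(sle, aro)  # 10000.0
print(f"SLE = ${sle:,.0f}  ALE = ${ale:,.0f}")
```

The ALE figure is what is weighed against the annual cost of a proposed countermeasure when deciding how much to spend.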

Qualitative Risk Analysis

Qualitative risk analysis is based upon the intuition and individual knowledge of the organization's subject matter experts as opposed to the facts and figures of quantitative risk analysis. Qualitative analysis is a subjective valuation system in which asset value is determined based on factors other than accounting costs. During qualitative analysis, variables are considered such as customer flight, the cost to rebuild goodwill, estimated loss of potential revenue, bad publicity or press, and the loss of staff productivity.

In qualitative risk analysis, dollar figures may be difficult to assign because the results are subjective. For instance, it is difficult to place an exact dollar figure on damage to goodwill. In many cases, the results of qualitative analysis are expressed in terms of high, medium, and low or on a scale from 0 to 5.
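A qualitative rating is often derived by combining a subjective likelihood with a subjective impact. The scale and thresholds below are illustrative assumptions, not a standard; each organization defines its own.

```python
# Hypothetical 1-5 scale for subjective ratings; real scales vary by organization.
LEVELS = {"low": 1, "medium": 3, "high": 5}

def qualitative_rating(likelihood: str, impact: str) -> str:
    """Combine two subjective ratings into an overall high/medium/low risk."""
    score = LEVELS[likelihood] * LEVELS[impact]  # 1..25
    if score >= 15:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

# A high-likelihood, medium-impact risk rates as overall high.
print(qualitative_rating("high", "medium"))
```

The multiplication and cutoffs here stand in for the judgment calls that subject matter experts would actually make.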

Analysis Information-Gathering Techniques

On many occasions, it will be the responsibility of the security practitioner to participate in the risk analysis process. Information may be gathered under different subject information categories that will be used during the analysis process.

  1. Determine a list of assets. Sometimes it is a major undertaking to locate hardware in racks, closets, and ceiling spaces or to identify software assets such as purchased applications, licenses, databases, proprietary information, and intellectual property stored away on servers and user workstations.
  2. Determine potential threats to an asset. This activity may involve both historical data and expert opinions.
  3. Determine annual rate of occurrence. This activity may involve log data and historical information.
  4. Determine how much of an asset was lost based upon prior attacks. This is attempting to determine the exposure factor based upon historical information.

This information is required for accurate calculations during risk analysis. Much of it comes from accounting data sources and historical data, while other information might be available from subject matter experts (SMEs). SMEs such as database administrators, network managers, and other technical staff may supply knowledge concerning the performance of an asset. Another source of data used in risk calculations may be department managers, executives, or general users, based upon their recollections of loss occurrences.

The following techniques can be used in gathering information relevant to an asset within its operational boundary or knowledge area.

  1. Questionnaire Risk assessment personnel can develop questionnaires. Different types of questionnaires can return relevant information. For instance, a questionnaire may ask for identification of all hardware and software items within a department. A separate questionnaire may ask the number of times an asset was unavailable for service. To avoid confusion, a clear nomenclature should be established describing the asset by name or inventory asset number. It is suggested that questionnaires be used during both onsite visits and in-person interviews.
  2. Onsite Interviews Onsite interviews might be deemed much more effective than attempting to retrieve questionnaires that were previously sent to individuals. It may, in fact, be easier and faster to discuss in person the information to be included on a questionnaire. Onsite interviews often allow the interviewer to observe the asset and determine its operation and security.
  3. Document Review During a document review, all things pertinent to an asset are examined, including system documentation, directives, policies, user guides, and an acquisition document. Security-related documentation such as spot reports, risk assessment reports, system test results, system security plans, and system security policies might be examined. Important documents also include prior impact analyses or asset criticality assessments.
  4. Scanning Tools A wide variety of scanning tools are available that scan for potential vulnerabilities on both networks and host devices. Scans may be either active or passive. Active scans are initiated for a particular purpose, such as identifying weak passwords, open ports, or vulnerabilities in applications. Passive scans may be performed continuously as actions occur on a regular basis. For example, passive scans may review individual passwords as they are created by users on a daily basis.

Risk Assessments

Risk assessments are the primary method used during effective risk management to implement risk reduction strategies. Because risk management is a continuous process throughout the System Development Lifecycle (SDLC), it is a process that can be used to identify, assess, and classify threats against the asset and determine the optimal mitigation technique or control to reduce risk. During this process, many operating parameters may be monitored and adjusted with the appropriate reports generated to facilitate management decision making.

Risk assessments build on risk analysis and incorporate the identification of specific risks, the likelihood of occurrence, the impact, and recommendations for controls. Even in small organizations, risk analysis and risk assessments require a substantial commitment of funds, personnel, and time. As might be expected, large organizations require teams to continuously perform risk assessments. In any organization, cost, time, and ease of use are primary concerns. Therefore, a proven framework may be adopted to provide efficient risk assessments.

NIST Special Publication 800-30 Revision 1, “Guide for Conducting Risk Assessments,” offers guidance for conducting risk assessments of federal information systems and organizations. The publication has been adopted by a large number of organizations worldwide and forms the foundation of their risk management strategies.

The Special Publication outlines the risk assessment component of risk management and provides a step-by-step process on how to prepare for risk assessments, how to conduct risk assessments, how to communicate risk assessments to key organizational personnel, and how to maintain the risk assessments over time.

In the original version of NIST Special Publication 800-30, which is now retired, a nine-step risk assessment process was outlined in detail. It was designed to be similar to a project plan, using typical project management concepts, and each of the nine steps included an input, a process method, and an output. Although effective and comprehensive, it proved to be very bulky and detailed when applied to hundreds of assets. There was a requirement for simplification featuring both speed and adaptability.

In the most recent version, NIST Special Publication 800-30 Revision 1, the risk assessment process has been distilled down to four primary steps. All of the original nine steps are still included in the process. Figure 5.3 illustrates the four basic steps in the risk assessment process and the specific tasks for conducting the assessment.

  1. Step 1: Preparing for the Risk Assessment The first step in the risk assessment process is to prepare for an assessment. The objective of this step is to provide a base framework of information concerning the risk goals of the organization, assessment methodologies to be used, and procedures for selecting risk factors and outlining various requirements, such as policies and regulations that impact the risk assessment. In preparation, there are five general categories of information that should be determined:
    • Identify the purpose of the assessment.
    • Identify the scope of the assessment.
    • Identify the assumptions and constraints associated with the assessment.
    • Identify the event information to be used as an input to the assessment.
    • Identify the risk model and analytical approaches.
  2. Step 2: Conducting the Risk Assessment The second step in the risk assessment process is to conduct the assessment. The objective of this step is to produce a list of risk information that identifies threat sources and events, vulnerabilities, and the likelihood that a threat may exploit them. Organizations may apply their own criteria to prioritize the threats and vulnerabilities they wish to mitigate. In practice, to keep within available resources, many organizations generalize threat sources, events, and vulnerabilities only as necessary to accomplish the risk assessment objectives. Conducting risk assessments includes the following specific tasks:
    • Identify threat sources that are relevant to the organization.
    • Identify threat events that could be produced by those sources.
    • Identify vulnerabilities within the organization that could be exploited by a threat source.
    • Determine the likelihood that an identified threat source would initiate specific threat events and the likelihood that a threat event would be successful.
    • Determine the adverse impact to the organizational operations and assets as a result of the exploitation of a vulnerability by a threat.
    • Determine the risk information based upon the combination of the likelihood of a threat exploitation of a vulnerability, the impact of such an exploitation of the organization, and any uncertainties associated with the risk determination.
  3. Step 3: Communicating and Sharing Risk Assessment Information Decision-makers across the organization require the appropriate risk-related information to guide risk decisions. The third step in the risk assessment process is to communicate the results of risk assessments to managers and other individuals. Sharing information consists of the following specific steps:
    • Communicating the risk assessment results.
    • Sharing detailed information developed during the execution of the risk assessment to enable managers to make accurate decisions concerning risk mitigation.
  4. Step 4: Maintaining the Assessment In all organizations and enterprises, risk assessment, asset valuation, and threat and vulnerability identification are ongoing processes. The acquisition of new networking products, the requirement for new controls to mitigate risk, and changes to the system over time revealed by risk monitoring techniques all contribute to changes in the overall risk landscape. The objective of this step is to keep the risk assessment current by noting and logging changes in risk knowledge as they occur. This monitoring provides an organization with the ability to determine the effectiveness of risk controls, identify risk-impacting changes, and verify that the systems and controls operate within established guidelines and baselines. In essence, the maintenance of the risk assessment verifies compliance with the organization's risk strategy. Maintaining risk assessments includes the following specific tasks:
    • Monitor risk factors identified in risk assessments on an ongoing basis.
    • Update the assessment when new products or services are acquired, substantial organizational changes such as mergers and acquisitions occur, and the regulatory environment changes.

Figure 5.3 The four risk assessment process steps from NIST SP 800-30 Revision 1

Managing Risks

As part of the overall risk management plan, organizations analyze and assess risks. During this process, assets are identified, potential threats are determined, and vulnerabilities are evaluated to determine the probability that a threat will exploit a vulnerability and therefore cause harm to the organization.

The next step in the process of managing risks is to formulate a risk treatment plan. A treatment, by definition, is a strategy by which risks are reduced by mitigating threats and reducing the likelihood that a vulnerability may be exploited. The treatment plan is a list of procedures, devices, controls, or steps that may be taken in an effort to reduce risk or minimize the impact once a threat event has taken place.

Treatment Plan

The treatment plan details how the organization plans to respond to potential risks. It outlines how risks are managed regardless of whether they are low, high, or acceptable risks and outlines preferred strategies for dealing with identified risks. Sometimes treatment plans are referred to as risk assessment plans, but in actuality, the treatment plan is the result of an assessment plan. The primary purpose of the risk treatment plan is to determine precisely who is responsible for the implementation of controls in what time frame and with what budget. It may also detail the response to an event or incident.

Risk Treatment

Once an organization identifies risks, it must choose from among several different strategies to deal with the risk. Risk treatment involves identifying a range of options for mitigating techniques that may be used to reduce risk. Risk treatment also refers to the overall process of prioritizing risks, evaluating various mitigation options by weighing their benefits and costs, preparing and implementing a risk reduction plan, and then monitoring the mitigation process.

The following risk response options are available for the treatment of risks:

  1. Acceptance When an organization acknowledges a risk and makes a conscious decision to just live with it, the organization is demonstrating risk acceptance. In the event of a total loss, the organization is willing to accept the cost of replacement. The decision to accept a risk may be the result of various situations or considerations:
    1. Cost The cost of a control can outweigh the damage if a threat situation is realized.
    2. Limited Damage Very little damage might occur if a threat situation is realized.
    3. Residual Risk Residual risk is risk remaining after a control has been put in place. For instance, an insurance policy may cover a 100 percent loss of a $100,000 house and have a $2,000 deductible clause. In the event of a total loss, the residual risk would be $2,000.
  2. Transference When the responsibility for the payment of loss is placed on a third party, it is called risk transference. In most instances, it involves an insurance company. Transferring a risk may also involve an outside source that handles the risk and is responsible for various activities such as investigation, litigation, asset recovery, and claim processing. It's important to note that transferring risk does not mean the risk goes away; it just means that somebody other than the organization is responsible for payment. Ultimate responsibility still remains with the asset owner.
  3. Reduction Risk reduction, or mitigation, is the process whereby a control is put in place to reduce risk. Risks are always present, and no risk can be reduced to zero. The application of various controls can, however, reduce a risk to an acceptable level. As you have seen, controls can take many forms.

    The effect of risk may be addressed by reducing the possibility that a threat will exploit a vulnerability before the event is triggered, or it may be reduced through incident response activities that limit continuing damage after an event is triggered. Prior to an event, appropriate controls may be used to reduce the likelihood of an event being triggered. After an event, appropriate actions taken by a damage control team or an incident response process can limit the damage caused by the event.

  4. Avoiding Risk Risk avoidance is the elimination of a risk situation. For instance, if you never climbed a ladder, you would never fall off one. To avoid risks, you may identify them from prior experience and analyze what steps can be taken to eliminate the risk situation. There's a fine line between risk reduction and risk avoidance. For example, if you never wanted to have an auto accident on Elm Street, you could avoid the risk by always driving down Main Street. This avoids the risk of having an accident on Elm Street, but in fact the risk of an accident still exists.
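When choosing among these treatments, a common cost-benefit check (an assumption here; the chapter does not spell out this formula) is to compare the ALE before and after a proposed control with the control's annual cost:

```python
def control_value(ale_before: float, ale_after: float,
                  annual_control_cost: float) -> float:
    """Annual value of a control: risk reduction minus what the control costs."""
    return (ale_before - ale_after) - annual_control_cost

# Hypothetical figures: a control cuts ALE from $10,000 to $2,000
# and costs $3,000 per year to operate.
value = control_value(10_000, 2_000, 3_000)
print(value)  # a positive value suggests the control is worth funding
```

A negative result would argue for acceptance or a cheaper control rather than this particular reduction.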

Risk Treatment Schedule

The risk treatment schedule documents the plan for implementing preferred risk mitigation strategies for dealing with identified risks. A risk treatment schedule is an output of the risk assessment phase. Risks have been identified, and various controls have been selected to reduce risk. The risk treatment schedule is a listing of risks in order of priorities. Figure 5.4 illustrates a typical risk treatment schedule.


Figure 5.4 Typical risk treatment schedule

Although each risk treatment plan can be customized specifically for an organization, at a minimum the plan should include the following sections:

  1. Risk Identification Number For example, identified risks can be sequentially numbered in a hierarchy format such as 1.0, 2.0, and 3.0 as the main risks, with 1.1 as a sub-risk and 1.1.1 as an underlying risk under this risk.
  2. Name of Risk Risks are listed in order of priority.
  3. Risk Treatment Technique List techniques such as acceptance, avoidance, reduction, and transference.
  4. Selected Control Include a list of controls selected to reduce the risk.
  5. Risk Rating after Treatment Use high, medium, and low or a 1-to-10 risk rating scale.
  6. Cost Benefit Analysis This is an optional analysis of the cost benefit of the control that mitigates risk.
  7. Person Responsible for Implementation of the Control Usually listed by department and role.
  8. Timetable for Implementation This is a date when the control will be fully operational.
  9. Control Monitoring Method This is a description of how to monitor the control after it is in place. Baselines and standards to be used should be specified.
  10. Incident Response A brief description of the intended response to an incident. This field may also refer to an incident response plan document.

Although the risk treatment schedule takes the form of a spreadsheet template, many of the fields are brief descriptions. The risk treatment plan should include full documentation supporting the identification of a risk during a risk assessment activity, an impact analysis, control selection criteria, control baselines and standards, and an incident response document.

Risk Register

A risk register is a primary document used to maintain a record of risks. It is a direct output of the risk assessment process. A risk register includes a detailed description of each risk that is listed. Although the risk register may appear to overlap the risk treatment schedule, the two documents serve different purposes. The risk treatment plan should include complete documentation concerning the identification of threats and asset vulnerabilities. It also prioritizes risks so they can be addressed with available resources. The risk register might include the following fields or columns:

  • Risk Identification Number
  • Name of Risk
  • Name or Title of Team Member Responsible for the Risk
  • Initial Date Reported
  • Last Updated
  • Impact Rating
  • Impact Description
  • Probability of Occurrence
  • Timeline for Mitigation
  • Completed Actions
  • Future Actions
  • Risk Status
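The fields above map naturally onto a simple record type. This is only an illustrative sketch; the field names follow the list above, and the sample values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    """One row of a risk register; field names mirror the columns listed above."""
    risk_id: str                  # e.g., "1.1" in a hierarchical numbering scheme
    name: str
    owner: str                    # team member responsible for the risk
    date_reported: str
    last_updated: str
    impact_rating: str            # high / medium / low
    impact_description: str = ""
    probability: float = 0.0      # probability of occurrence, 0.0-1.0
    mitigation_timeline: str = ""
    completed_actions: list = field(default_factory=list)
    future_actions: list = field(default_factory=list)
    status: str = "open"          # risk status

# Hypothetical entry for demonstration only.
entry = RiskRegisterEntry("1.1", "Firewall bypass", "Network Ops",
                          "2015-03-01", "2015-06-01", "high",
                          probability=0.4)
print(entry.risk_id, entry.status)
```

Keeping the register as structured records makes it straightforward to sort by priority or filter by status when reporting to management.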

Other types of risk treatments exist that do not fall into the preceding categories. It is nearly impossible to predict all possible risks. In this case, the organization may acknowledge that other risks exist. If an unknown risk event is triggered and identified, it may be placed into an existing general response category. This type of response is called control and investigation. The damage is controlled by a general preplanned process, and then an investigation is launched as to the source of the threat, the scope of the attack, and the amount of damage caused.

For example, a network intrusion prevention system (IPS) could be attacked through an exploit of a previously unknown flaw in its operating system. This type of attack is called a zero-day attack because it is previously unknown to the manufacturer, and therefore no patches or mitigation techniques exist. The zero-day attack can exploit a vulnerability, allowing an attacker to place a Trojan on the network. The Trojan can enable its malware component to open a back door to a database application. Although the organization may not have listed this device as a possible risk in its risk register, it still may have taken action. The attack might immediately be classified as an Internet intrusion, and an incident response team might immediately be called in to control any damage. After the event, an investigation should be undertaken to identify the specific attack.

Risk Visibility and Reporting

Most organizations today have an insufficient understanding of risk and an inability to identify and prevent risks. Many managers have insufficient knowledge of the potential threats and vulnerabilities within the organization, let alone the resources or a plan to address a risk event once it happens.

Organizations with limited risk visibility are continuously in a reactive state. Most organizations have insufficient understanding of how risk affects daily operations. The culture of many organizational groups is engaged almost exclusively with accomplishing daily tasks, and most managers are rarely able to comprehend the big picture or gain visibility into a risk landscape. Executives and managers at all levels are not risk aware even within their own workspace. Should a risk event occur, the focus is placed on fixing problems and putting out fires rather than on risk prevention.

Enterprise Risk Management

Enterprise risk management (ERM) is a program designed to change the risk culture from reactive to proactive and to accurately forecast and mitigate the risks to key programs. For ERM to be successful, it must be undertaken at an organizational level. This means that it must become part of the overall culture of the organization in order to identify and reduce risks on a proactive basis. To accomplish this task, senior management must decide to make the risk management program more visible throughout the organization.

Without access to comprehensive risk data, an organization may find it difficult to identify its outstanding risks and their probability of occurrence, causing forecasts to be less accurate. There may also be a disconnect between the information used by employees in the field and the information used by executive decision-makers.

An introduction of an enterprise risk management program may involve any of the following:

  • Employee risk awareness workshops
  • Risk identification across the organization
  • Use of risk dashboards
  • Incident reporting and incident response reports
  • Implementation of an executive risk board or advisory committee
  • Continuous monitoring of operational systems

An organization-wide ERM monitors the entire risk surface of an organization, which may include the monitoring of regulatory compliance, financial activities, and the physical attributes of the facilities as well as IT network risks. The ERM program is intended to raise the awareness of risks to the organization and enhance the reaction time to either mitigate the risk, thereby reducing the exposure, or react to the risk, minimizing damage.

Continuous Monitoring

Managing the risk landscape involves many more actions than just identifying risks, establishing controls, and responding to problems. It involves a continuous monitoring requirement for the effective communication of the status of system internal controls, where threat events may be discovered immediately or as soon as possible after the incursion. Rapid identification of problems or weaknesses and quick response actions help to reduce the cost of possible risk events.

An ideal situation would be to continuously manage risk through real-time metrics. To perform this type of monitoring program, performance baselines must be established for all controls being monitored. These controls should trigger an alert in the event of state changes or activities outside of normal operating parameters. The baselines themselves must also be maintained, since normal operations inevitably change over time.
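The baseline-plus-tolerance idea can be sketched in a few lines; the metric names and threshold values below are assumptions for illustration, not recommended settings.

```python
from typing import Optional

def check_metric(name: str, value: float,
                 baseline: float, tolerance: float) -> Optional[str]:
    """Return an alert string when a monitored value drifts outside its baseline."""
    if abs(value - baseline) > tolerance:
        return f"ALERT: {name}={value} outside baseline {baseline} +/- {tolerance}"
    return None

# Hypothetical baselines; real values come from observed normal operation.
print(check_metric("failed_logins_per_hour", 42, baseline=5, tolerance=10))
print(check_metric("cpu_percent", 48, baseline=50, tolerance=10))  # in range: None
```

In practice the baseline and tolerance would be re-derived periodically from recent history, reflecting the maintenance requirement described above.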

  1. Passive Monitoring Passive monitoring is characterized by capturing traffic crossing a network or device, usually from a span or mirror port on a switch or router, or directly off the network using a network tap. Typically, the information is in the form of network traffic or packets and is recorded in log files for future review. Although the data is collected in real time directly off the network, it is reviewed offline at a later time.

    The amount of data captured during passive monitoring can be substantial. Various software tools may be used on the data to search for anomalies. During the log review, various anomalies or activities are noted, requiring some procedure to be followed.

    During passive monitoring, logs are accumulated from various machines manually or routed to a monitoring console. The serious deficiency of passive monitoring in many organizations is that logs may not be examined on a timely basis. Passive monitoring is, however, extremely helpful in device troubleshooting.

  2. Active Monitoring Active monitoring is an approach in which special packets are introduced to the network in an effort to measure server and other device performance. It is ideal for the emulation of various scenarios and testing Quality of Service (QoS). System administrators utilize active monitoring to test the speed, accuracy, and statistical qualities of the overall network and various devices. On many occasions, these tests are required to meet various contractual compliance standards. Active monitoring is a test mechanism usually triggered by a system administrator or other individual in an effort to test a device or the entire network.
  3. Real-Time Monitoring Real-time monitoring may also be referred to as automated monitoring. Various items such as network intrusion prevention systems (NIPSs) and other devices continuously monitor for intrusions based upon a variety of signatures, behavioral characteristics, or heuristics. When an intrusion is detected, the device immediately alerts the operator of an event. If the event is determined to be an intrusion, it is then labeled an incident.

    Depending upon established procedures and protocol for a designated type of intrusion, the Computer Emergency Response Team (CERT) may be required to perform necessary activities. The difference between real-time monitoring and active monitoring is that real-time monitoring is continuously listening to the traffic on the network and automatically sending alerts based upon some criteria.

    Even with real-time monitoring, a human response may not be quick enough. Various devices such as a network intrusion prevention system may discover an intrusion and immediately trigger an action, such as modifying a firewall rule that closes a port or blocks an IP address. This automatic response would be much more timely and efficient than a human response.

  4. Security Information and Event Management (SIEM) Security Information and Event Management (SIEM) software products are combined with hardware monitoring devices to provide real-time analysis of security alerts. In many cases, SIEM devices are sold as hardware units that monitor the network and devices on the network while aggregating data from all the various logs. Once the data is accumulated, the device correlates it. Through automated analysis, the data is turned into useful information.

    If an anomaly is located, operators are alerted based upon the severity of the intrusion. For instance, operators may be notified to remediate issues by special screens that appear on their consoles. These devices may also be configured to alert administrators by telephone or pager. Less severe intrusions or anomalies can be displayed on dashboards or in various reports.
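This severity-based routing can be illustrated with a toy dispatcher; the 1-to-10 scale and the thresholds are assumptions, since each SIEM product defines its own severity model.

```python
def route_alert(severity: int) -> str:
    """Map a correlated event's severity (1-10, assumed scale) to a channel."""
    if severity >= 8:
        return "phone/pager"        # operators paged for the most severe events
    if severity >= 5:
        return "console screen"     # mid-severity issues surface on consoles
    return "dashboard/report"       # minor anomalies go to dashboards and reports

print(route_alert(9))  # a critical correlated event
print(route_alert(3))  # a low-severity anomaly
```

A real SIEM would also attach context (source logs, affected assets) to each alert, but the escalation logic follows this shape.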

Security Operations Center

Many organizations employ an in-house or a third-party security operations center (SOC) to monitor the physical perimeter, CCTV cameras, and facility access (Figure 5.5). An information security operations center (ISOC) primarily monitors the organization's applications, databases, websites, servers, and networks. The ISOC provides real-time situational awareness and is a primary center for network defense.


Figure 5.5 A typical security operations center

Many security practitioners are initially employed in an ISOC that provides an excellent opportunity to learn about the network and the business of the organization. The ISOC staff includes security engineers, system administrators, and other IT personnel and network professionals that have certifications such as the (ISC)2 Certified Information System Security Professional (CISSP).

Real-time security operations are sometimes outsourced to service providers that offer managed services to monitor the client's network on a 24/7 basis. This is ideal for small to medium-sized organizations as well as organizations that have numerous branch offices or facilities.

Threat Intelligence

Time is of the essence in the world of cybersecurity. At the same time, organizational resources such as personnel, money, hardware infrastructure, and necessary skill sets are scarce. Threat intelligence is a method by which the organization obtains up-to-the-minute information concerning zero-day attacks, threat profiles, malware signatures, and other vital pieces of information required to protect the organization's resources.

The sharing of cyber threat information and threat intelligence has been strained at best. It does make sense, however, that if one company is attacked, other companies might want to know so that they can protect their own assets. The problem has been with current laws. The sharing of information between companies in the same industry may appear to be in violation of Securities and Exchange Commission regulations. Many are concerned that sharing specific threat intelligence with the government may violate privacy laws and regulations.

Various laws proposed in the U.S. Congress have failed on a number of counts. Unfortunately, although logic dictates the wisdom of sharing information, the business sector's fear of sharing information with the government is based on distrust, specifically the possibility of opening a door that would allow the government to use acquired data in a surveillance program. While private-sector organizations generally agree that attack information should be readily shared, the disagreement is over proprietary data such as customer lists and personally identifiable information.

A way around the current legal hurdles, or the lack of legislation, is to use a shared central proprietary attack database. Appliances are currently on the market that identify zero-day attacks or other anomalies and immediately upload the data to a proprietary database. Other appliances from the same company can access the database and immediately protect their user networks from the identified attack. Using this technique, only the intrusion data is shared while the end users remain anonymous. This may become an extremely viable method for participating organizations to share attack information as soon as it's available.

Analyzing Monitoring Results

The performance and availability of networks are constantly under threat from both internal and external sources. The security practitioner requires an understanding of monitoring techniques as well as how to interpret the results gathered during the monitoring process. Network devices are sometimes widely dispersed throughout an organization. This may also include offsite networks and remote offices.

Network monitoring involves the following general categories of information:

  1. Network Device Status Network administrators require increased visibility into the network infrastructure to help identify potential failures in critical services and applications that may be caused by any number of reasons. When troubleshooting network equipment, it is important to seek the root cause of the problem. For instance, a server may be offline and unavailable, but a network monitoring tool indicates that the cause is actually a faulty router. This type of analysis indicates that the server was not the problem; the router was.
  2. Device Management Protocols Network devices communicate their status using Simple Network Management Protocol (SNMP) v1, v2c, or v3 or Internet Control Message Protocol (ICMP). The two protocols access network devices utilizing special packets. For instance, using ICMP, the security practitioner can send a ping message to a device to determine whether it is working. If the device is working, it will answer the ping. ICMP is a simple method to determine the status of a network device.
  3. SNMP Protocol For a more detailed view of the network device, SNMP is usually employed. Many networks have hundreds or thousands of network devices, including hosts, servers, switches, and routers. This protocol poses a query to the SNMP-enabled device asking for the status of the function queried. Using the Simple Network Management Protocol, a network administrator may monitor network usage and performance, monitor user access, and detect potential or existing network faults. SNMP is currently built into a large number of network products by manufacturers and can therefore be used on a very large number of devices throughout the network environment.

    SNMP makes use of information contained in a management information base (MIB). The MIB is a collection of information about network devices such as routers, switches, and servers, stored in a database that can be accessed using SNMP. Each managed object represents a characteristic of the device being managed. SNMP includes the following components:

    1. SNMP Simple Network Management Protocol (SNMP) is an application-layer protocol. It is one of the widely accepted protocols for managing and monitoring network elements. Most of the professional-grade network elements come with a bundled SNMP agent.
    2. Managed Device A managed device is a part of the network that requires some form of monitoring and management, such as routers, switches, servers, workstations, printers, and UPSs.
    3. SNMP Manager A manager or management system is a separate entity that is responsible for communicating with the SNMP agents implemented on network devices. This is typically a computer used to run one or more network management systems.
    4. SNMP Agent The agent is a program that is packaged within the network device. Enabling the agent allows it to collect management information from the device locally and make it available to the SNMP manager when queried.
    5. Management Information Database (MIB) The SNMP agent on each device may be specifically configured to obtain and store various device parameters. This information is stored in a database called a management information base (MIB). When queried by the SNMP manager, the SNMP agent on the device responds with the information contained in the management information base. The SNMP manager adds this information to the network management system (NMS).
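The ICMP status check described above can be sketched in a few lines of Python. This is a minimal sketch, not a production tool: the prober is injected as a callback so the logic runs without a live network, and the device names and addresses are hypothetical. A real implementation might shell out to `ping` or use raw ICMP sockets instead of the stand-in function.

```python
# Minimal sketch of an ICMP-style reachability sweep. The prober is
# injected so the logic can be exercised without a live network.
def sweep(devices, prober):
    """Return a dict mapping device name -> 'up' or 'down'."""
    status = {}
    for name, address in devices.items():
        status[name] = "up" if prober(address) else "down"
    return status

# Stand-in for a real ICMP echo (e.g., shelling out to `ping -c 1`).
# Addresses and names below are hypothetical.
REACHABLE = {"10.0.0.1", "10.0.0.20"}
fake_ping = lambda addr: addr in REACHABLE

devices = {"core-router": "10.0.0.1",
           "file-server": "10.0.0.20",
           "branch-switch": "10.0.5.1"}
print(sweep(devices, fake_ping))
# {'core-router': 'up', 'file-server': 'up', 'branch-switch': 'down'}
```

Swapping the callback for a real ICMP probe turns the same loop into the device-status monitor described above.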

Security Analytics, Metrics, and Trends

As a security practitioner, you will undoubtedly be involved with monitoring the security aspects of networks and network-connected devices. This is different from the device monitoring discussed earlier. Device monitoring is a method of gauging the health of a device and the health of the network. During security device monitoring, various alerts will be generated based upon established templates or parameters of a network device.

There is a wide range of log files that track various events on the network. Servers and devices record a wide range of items in logs sometimes called syslogs. In practice, syslogs are used to gather information, usually in a database that may contain data aggregated from many different types of systems. The messages maintained in a syslog can be customized through templates to indicate facility level, location, type of activity, and the severity of the alert (such as emergency, critical, or error). The type of message the program is logging is indicated by the facility level. Various applications across the organization may have different facility level codes. Of importance to the security practitioner will be the levels of message severity. There are eight severity levels:

  1. Code 0 Emergency This is the highest alert, possibly affecting major sections of the network or applications.
  2. Code 1 Alert This indicates a major problem, such as the loss of a central application or communication method.
  3. Code 2 Critical This represents the loss of a backup or secondary device.
  4. Code 3 Error This indicates the failure of an application or system that was not critical in nature.
  5. Code 4 Warning Warnings are usually set to indicate that a threshold is near. For instance, server utilization is at 90 percent.
  6. Code 5 Notice These messages indicate potential problems that should be investigated.
  7. Code 6 Information These are status messages and no action is usually required.
  8. Code 7 Debug Debug messages are utilized by developers and programmers.
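The eight severity levels above lend themselves to straightforward filtering. The following is a minimal sketch, assuming a batch of (code, message) events and an escalation threshold chosen by the organization; the event messages are hypothetical.

```python
# Syslog severity codes 0-7 as listed above; a lower number is more severe.
SEVERITIES = ["emergency", "alert", "critical", "error",
              "warning", "notice", "information", "debug"]

def needs_escalation(code, threshold=3):
    """True when a message at `code` is at least as severe as `threshold`."""
    return code <= threshold

# Filter a batch of (code, message) events down to those worth paging on.
events = [(6, "backup completed"),
          (1, "core app unreachable"),
          (4, "disk at 90 percent")]
urgent = [(SEVERITIES[code], msg) for code, msg in events
          if needs_escalation(code)]
print(urgent)  # [('alert', 'core app unreachable')]
```

In practice the threshold would come from the organization's alerting policy rather than a hard-coded default.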

Event Data Analysis

Event data analysis is the process of taking raw data from numerous sources, assimilating and processing it, and presenting the result in a way that can be easily interpreted and acted upon. While the science of mathematical data analysis is quite extensive, the security practitioner should understand the sources of the data and what the data represents.

For many years, dashboards have been used to represent the correlation of data from numerous sources. At a glance, practitioners, analysts, administrators, and executives have been able to absorb large amounts of data when represented as a digital numerical display, dial with a pointer needle, bar chart, pie chart, or some other visual display technique. Microsoft products, such as Microsoft Excel, offer excellent capability to display graphs and dashboards of complex data. Excel tools such as those for formatting content, conditional formatting, pivot tables, and advanced data filtering have been used in IT data analysis.

Many vendors offer specialized tools that can be used to filter log files, memory dumps, packet captures, and other sources of raw IT event data. In many cases, data templates are used or modified to select the specific event data of interest. For example, log data or packet captures can be analyzed to identify the IP source of specific traffic. Large amounts of data over a period of time can be quickly analyzed to determine various factors such as frequency, length of visit, actions taken or data accessed, and other valuable information that can be used during investigation.
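As a minimal illustration of this kind of analysis, the sketch below tallies traffic frequency by source IP from raw log lines. The log format shown is hypothetical; real tools parse vendor-specific formats, but the aggregation step is the same idea.

```python
# Counting event frequency by source IP from raw log lines, as a
# minimal example of event data analysis. The log format is hypothetical.
from collections import Counter

raw_logs = [
    "2024-01-05 10:01:12 src=203.0.113.9 action=login-fail",
    "2024-01-05 10:01:14 src=203.0.113.9 action=login-fail",
    "2024-01-05 10:02:30 src=198.51.100.4 action=login-ok",
]

def source_frequency(lines):
    """Extract the src= field from each line and tally occurrences."""
    sources = (field.split("=", 1)[1]
               for line in lines
               for field in line.split()
               if field.startswith("src="))
    return Counter(sources)

print(source_frequency(raw_logs).most_common(1))
# [('203.0.113.9', 2)]
```

The same pattern extends to frequency over time windows, actions taken, or data accessed, which is exactly the information an investigator wants from a large capture.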

Visualization

Visualization is a technique of representing complex data in a visual form rather than a tabular form such as a list. Even simple network diagrams can be extremely complex and hard to understand. Visualization takes advantage of the human brain's capability to learn and process complex information based upon visual input. The brain notices intricacies and details all at once, allowing us to analyze and absorb vast quantities of abstract information at the same time. Prior to graphic visualization, flow charts, graphs, diagrams, and other techniques were used to depict the relationship between items on a network. As networks became much more advanced and complex, paper-based graphing techniques became inefficient.

Information visualization in IT network design is an emerging field of information communication. Data analysis is indispensable when diagnosing problems facing large data communication networks. By viewing the entire network as a spatial diagram of connected nodes and network devices, the analyst can easily visualize large amounts of information and apply reason and insight rather than analytical skills in deciphering information. Many vendors supply visualization software that will map entire networks and allow the user to zoom out or zoom in to comprehend the amount of information they desire. Future iterations of visualization software will be able to display problems and recommend solutions graphically.

In the past, IT network information has been represented by various methods, including nodes illustrated as hanging off of a central bus, host workstations and servers as circles in network designs, tree diagrams for Ethernet networks, and clouds representing the Internet or other mass communication technology. Figure 5.6 represents a data visualization of a large network. Major work centers are represented by the larger circles. Work groups of satellite offices are represented by smaller circles in groups, and finally individual nodes are represented by single circles.


Figure 5.6 Data visualization

CC-by-SA Calimius 2014

Communicating Findings

The results of data analysis can be communicated using a number of methods. The decision as to the medium or media to be used to convey the information is directly related to the speed of the communication. For example, displaying data on a computer screen is much faster than printing out a pile of paper. Also, utilizing a dashboard to quickly visualize data is much faster than analyzing a spreadsheet with columns of information.

Various roles within the organization require data to make informed decisions. Each of these roles generally requires only the data in which it is specifically interested. Data should be presented in a manner that is actionable by the individual receiving the communication. The following roles might be receiving and acting upon data:

  1. Security Practitioner Data concerning device error situations, misconfigurations, tasks to be performed, and assignments based upon network conditions
  2. Database Administrator/Server Administrator Data concerning software or device performance, intrusions, penetrations, error conditions, and malfunctions
  3. Network Administrators Performance reports, reliability, traffic flow, quality of service, security situations
  4. Executives Operational summaries, executive reports, compliance reports

Summary

The Systems Security Certified Practitioner must be familiar with the organization's policies, standards, procedures, and guidelines to ensure adequate information availability, integrity, and confidentiality.

Security administration includes the roles and responsibilities of many persons within the organization who must carry out various tasks according to established policy and directives. Practitioners may be involved with change control, configuration management, security awareness training, and the monitoring of systems and devices. The application of generally accepted industry best practices is the responsibility of the IT administrators and security practitioners. Key administration duties may include configuration, logging, monitoring, upgrading, and updating products as well as providing end-user support.

In this chapter, the importance of policies within an organization was stressed. Without policies and the resulting procedures and guidelines, there would be a complete lack of corporate governance with respect to IT security. Security policies are the foundation upon which the organization can rely for guidance. These policies also include the concept of continuity of operations. Continuity of operations includes all actions required to continue operations after a disaster event occurrence. An associated policy of disaster preparedness and a disaster recovery policy provide the steps that are required to restore operations to a point prior to the disaster.

The configuration and management of various systems and network products may be the responsibility of the security practitioner. Various concepts of patching and upgrading systems were discussed in this chapter. Version numbering is the methodology used to identify various versions of software, firmware, and hardware, while release management includes the responsibilities involved in the distribution of software changes, upgrades, or patches throughout the organization.

This chapter covered data classification policies with regard to the responsibilities of the security practitioner. The practitioner can be involved in both the classification process and the declassification process for data management policies.

Security education and awareness training was also discussed in this chapter. The security practitioner may be involved in conducting or facilitating security awareness training courses or sessions. During the sessions, topics such as malware introduction, social media, passwords, and the implications of loss devices can be covered. Different groups of individuals will require training.

Business continuity and disaster recovery plans are important programs to initiate within an organization. The security practitioner will be involved in originating or maintaining such plans and will definitely be involved if the plan is exercised. A business impact analysis is central to the creation of a business continuity plan. The plan should be tested using a variety of methods to ensure that individuals are aware of their responsibilities and that all of the details of the plan have been considered.

Exam Essentials

  1. The Importance of Risk Understand that risk is a function of the likelihood of a given threat agent exploiting a particular vulnerability and the resulting impact of that adverse event on the organization.
  2. Information Risk Management (IRM) IRM is the process whereby risks to IT hardware, software, and information assets are identified and reduced to an acceptable level and controls are implemented to maintain that level.
  3. Risk Reduction The goal of any organization is to reduce risk to an acceptable level. The process of reducing risk is referred to as mitigation (another term for reduce). Risks can never be reduced to zero.
  4. Two Types of Assets There are tangible assets and intangible assets.
  5. Compensating Controls Compensating controls address any weakness in a primary control.
  6. Quantitative Risk Analysis and Qualitative Risk Analysis Quantitative risk analysis is numerical or value based, usually expressed as costs in dollars. Qualitative risk analysis considers items upon which a value may not be placed.
  7. Four Primary Methods of Treating Risks The four methods of treating risk are acceptance, transference, reduction, and avoidance.
  8. Risk Visibility Risk visibility is a method organizations use to understand risks and their potential impact.
  9. Continuous Monitoring Continuous monitoring is the passive, active, or real-time method of acquiring data about applications, devices, or the network.
  10. Obtaining Data from Network Devices Two primary protocols, ICMP and SNMP, are used to communicate with network devices.
  11. Data Visualization Visualization is a technique where very complex data is presented in graphical form for rapid visual analysis.
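Quantitative risk analysis (essential 6 above) rests on two standard formulas: single loss expectancy (SLE) equals asset value (AV) times exposure factor (EF), and annualized loss expectancy (ALE) equals SLE times the annualized rate of occurrence (ARO). A quick worked sketch, using hypothetical figures:

```python
# Standard quantitative risk formulas:
#   SLE = AV * EF      (loss expected from a single incident)
#   ALE = SLE * ARO    (loss expected per year)
def sle(asset_value, exposure_factor):
    return asset_value * exposure_factor

def ale(single_loss_expectancy, annualized_rate):
    return single_loss_expectancy * annualized_rate

# Hypothetical example: a $100,000 server, 40% damaged per incident,
# with incidents expected twice a year.
loss_per_event = sle(100_000, 0.40)   # 40000.0
annual_loss = ale(loss_per_event, 2)  # 80000.0
print(loss_per_event, annual_loss)
```

The ALE figure is what makes a cost-benefit comparison simple: a safeguard costing less per year than the ALE it eliminates is generally worth implementing.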

Written Lab

You can find the answers in Appendix A.

  1. Write a paragraph explaining the difference between quantitative analysis and qualitative analysis.
  2. What is a primary difference between a threat and a vulnerability?
  3. Briefly explain SLE, ALE, ARO, and EF.
  4. Describe the four methods of treating risk.

Review Questions

You can find the answers in Appendix B.

  1. What is a primary goal of security in an organization?

    A. Eliminate risk

    B. Mitigate the possibility of the use of malware

    C. Enforce and maintain the AIC objectives

    D. Maintain the organization's network operations

  2. Which of the following provides the best description of risk reduction?

    A. Altering elements of the enterprise in response to a risk analysis

    B. Mitigating risk to the enterprise at any cost

    C. Allowing a third party to assume all risk for the enterprise

    D. Paying all costs associated with risks with internal budgets

  3. Which group represents the most likely source of an asset being lost through inappropriate computer use?

    A. Crackers

    B. Employees

    C. Hackers

    D. Flood

  4. Which of the following statements is not accurate?

    A. Risk is identified and measured by performing a risk analysis.

    B. Risk is controlled through the application of safeguards and countermeasures.

    C. Risk is managed by periodically reviewing the risk and taking responsible actions based on the risk.

    D. All risks can be totally eliminated through risk management.

  5. Which option most accurately defines a threat?

    A. Any vulnerability in an information technology system

    B. Protective controls

    C. Multilayered controls

    D. Possibility for a source to exploit a specific vulnerability

  6. Which most accurately describes a safeguard?

    A. Potential for a source to exploit a categorized vulnerability

    B. Controls put in place to provide some amount of protection for an asset

    C. Weakness in internal controls that could be exploited by a threat or a threat agent

    D. A control designed to warn of an attack

  7. Which of the following choices is the most accurate description of a countermeasure?

    A. Any event with the potential to harm an information system through unauthorized access

    B. Controls put in place as a result of a risk analysis

    C. The annualized rate of occurrence multiplied by the single lost exposure

    D. The company resource that could be lost due to an accident

  8. Which most closely depicts the difference between qualitative and quantitative risk analysis?

    A. A quantitative risk analysis does not use the hard cost of losses; a qualitative risk analysis does.

    B. A quantitative risk analysis makes use of real numbers.

    C. A quantitative risk analysis results in subjective high, medium, or low results.

    D. A quantitative risk analysis cannot be automated.

  9. Which choice is not a description of a control?

    A. Detective controls uncover attacks and prompt the action of preventative or corrective controls.

    B. Controls perform as the countermeasures for threats.

    C. Controls reduce the effect of an attack.

    D. Corrective controls always reduce the likelihood of a premeditated attack.

  10. What is the main advantage of using a quantitative impact analysis over a qualitative impact analysis?

    A. A qualitative impact analysis identifies areas that require immediate improvement

    B. A qualitative impact analysis provides a rationale for determining the effect of security controls

    C. A quantitative impact analysis makes a cost benefit analysis simple

    D. A quantitative impact analysis provides specific measurements of attack impacts

  11. Which choice is not a common means of gathering information when performing a risk analysis?

    A. Distributing a multi-page form

    B. Utilizing automated risk polling tools

    C. Interviewing fired employees

    D. Reviewing existing policy documents

  12. Which choice is usually the most-used criteria to determine the classification of an information object?

    A. Useful life

    B. Value

    C. Age

    D. Most frequently used

  13. What is the prime objective of risk management?

    A. Reduce risk to a level tolerable by the organization

    B. Reduce all risks without respect to cost to the organization

    C. Transfer all risks to external third parties

    D. Prosecute any employees that are violating published security policies

  14. A business asset is best described by which of the following?

    A. An asset loss that could cause a financial or operational impact to the organization

    B. Controls put in place that reduce the effects of threats

    C. Competitive advantage, capability, credibility, or goodwill

    D. Personnel, compensation, and retirement programs

  15. Which is not accurate regarding the process of a risk assessment?

    A. The possibility that a threat exists must be determined as an element of the risk assessment.

    B. The level of impact of a threat must be determined as an element of risk assessment.

    C. A risk assessment is the last result of the risk management process.

    D. Risk assessment is the first step in the risk management process.

  16. Which statement is not correct about safeguard selection in the risk analysis process?

    A. Total cost of ownership (TCO) needs to be included in determining the total cost of the safeguard.

    B. It is most common to consider the cost effectiveness of the safeguard.

    C. The most effective safeguard should always be implemented regardless of cost.

    D. Several criteria should be considered when determining the total cost of the safeguard.

  17. Which option most accurately reflects the goals of risk mitigation?

    A. Determining the effects of a denial of service and preparing the company's response

    B. The removal of all exposure and threats to the organization

    C. Defining the acceptable level of risk and assigning the responsibility of loss or disruption to a third-party, such as an insurance carrier

    D. Defining the acceptable level of risk the organization can tolerate and reducing risk to that level

  18. Of the following choices which is not a typical monitoring technique?

    A. Passive monitoring

    B. Active monitoring

    C. Subjective monitoring

    D. Real-time monitoring

  19. Which option is not a risk treatment technique?

    A. Risk acceptance

    B. Ignoring risk

    C. Risk transference

    D. Risk reduction

  20. Which of the following is not a control category?

    A. Administrative

    B. Physical

    C. Preventative

    D. Technical
