Appendix B RISK ASSESSMENT METRICS

This appendix proposes and defines a set of quantifiable metrics that can be used to mathematically calculate risk. Most of this material evolved out of the Trident Risk Assessment Process (T-RAP) that was published under the title Risk Management Theory and Practice.1 This study was sponsored by the U.S. Air Force Information Warfare Center. Subsequent work with the theories and concepts therein produced a set of equations and analytical tools that were incorporated into a series of technology offerings. I cite these modeling processes here as a complement to the McCumber Cube methodology and as the basis for the risk management assessment methodology presented in Chapter 5.

This risk assessment process captured the major elements of what specific information needed to be gathered, quantified, and analyzed in order to calculate measurements of baseline and residual risk. Baseline risk is the sum total of the anticipated adverse impact that can result if a threat exploits a vulnerability of the assets under review. Residual risk is the risk that remains after the application of the chosen safeguards. These are the key elements for understanding security in an operational environment.


OVERVIEW OF THE BASIC RISK ASSESSMENT PROCESS

The basic risk assessment process (Figure B.1) is an iterative methodology in which the operational environment is a key component. This aspect sets it apart from the McCumber Cube methodology. The three major activities associated with this process are the policy and data capture, risk analysis, and decision support phases. During the first activity, the various data elements are gathered and quantified. These data are captured and categorized as three separate lists of threats, vulnerabilities, and assets. Ultimately, they will create sets of triplets that will be used to rank order the risk to the information assets. The result of this phase is the creation of a baseline risk measurement: the risk to the information resources that exists before safeguards are employed to mitigate risk.


Figure B.1 Basic Risk Assessment Process

Once the baseline risk is calculated, safeguards are selected to mitigate these risks. The measurement at this phase of the process determines the degree to which the selected safeguards mitigate the risk. This calculation produces a metric known as residual risk: the risk that remains after the application of the selected safeguards. The mathematics recognizes that no security program provides 100 percent risk mitigation; such a risk avoidance capability is not achievable in information systems environments.

With these metrics to manipulate, the security analyst or practitioner can then run through a series of analyses to assess the effectiveness of different safeguard options and compare the impact on residual risk while keeping track of the investment required for each set of options. In this way, different decision support methodologies can be applied to select and implement the most cost-effective security architecture based on the value of the information resources. Decision support methodologies such as weak link analysis, cost-benefit analysis, linear programming, and goal programming can be employed.


RISK ASSESSMENT METRICS

Now that you have an overview of the risk assessment process, we need to take a closer look at the empirical data that must be gathered to make this process work mathematically. We will also present a quick overview of the various decision support calculations that can be used to determine the most effective security architecture for the environment under review. Finally, we will show where data can be located to support this process.


THREAT METRICS

In Chapter 4, we showed how threat is decomposed into environmental and human categories. We also decomposed the human threat element into eight categories to better capture and understand all possible human-based threats.

To quantifiably capture the elements of human threat, various data sources need to be analyzed. The threat factors can be gathered by analyzing historical data and projections of trends in human threat experience. Also, statistical and expert analysis can be incorporated to provide a starting point for developing a custom or more tailored threat analysis. The purpose of this analysis is to identify and rank those threats that apply specifically to the assets under review or the organization itself.

There are three primary areas of human threat data that need to be factored in the risk assessment process:

  1. System connectivity
  2. Motivation and capability of the threat
  3. Occurrence measurement for a threat class

We will look at each of these three areas.

System connectivity, or access, is the first element. It is designed to measure the amount of presence the human threat has to an organization. Obviously, a trusted insider would represent more of a threat to the organization’s information than someone without these physical access rights. Physical access measures the amount of physical presence a threat could have to the organization. Electronic access measures the amount of electronic or logical presence a threat could have to the organization.

The motivation and capabilities of the human threat can be measured by a threat profile to determine the relative motivation and technical capability of a threat. Motivation measures the degree to which a threat wants to cause harm to the organization. Capability measures the knowledge that a threat possesses about the use of the information infrastructure and technology systems of an organization.

The occurrence measurement ranks the historical data and projected likelihood of a similar occurrence the way an insurance company actuarial table projects the likelihood of a person having an accident. The occurrence measurement is an approximation of the probability of occurrence. It takes into account the number of incidents attributable to each threat class and the size of the sample population.

The human threat measurement, then, is a function of three primary elements: the degree and mode of access, the threat profile, and the occurrence measurement. Mathematically, the threat measurement of the risk assessment process is:

Threat measurement = f(access, threat profile, occurrence measurement)
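
The process does not prescribe exact weights for these elements, so the following is only a minimal sketch, assuming simple numeric ratings, sums for the access and threat profile elements, and a product with the occurrence measurement:

```python
# Illustrative sketch only: the process defines threat measurement as
# f(access, threat profile, occurrence measurement) without prescribing
# exact weights, so the sums and product below are assumptions.
def occurrence_measurement(incidents: int, population: int) -> float:
    """Approximate probability of occurrence for a threat class."""
    return incidents / population if population else 0.0

def threat_measurement(physical_access: float, electronic_access: float,
                       motivation: float, capability: float,
                       occurrence: float) -> float:
    access = physical_access + electronic_access    # degree and mode of access
    threat_profile = motivation + capability        # motivation and capability
    return access * threat_profile * occurrence

# Hypothetical outsider threat class rated on a 1-3 scale
print(threat_measurement(physical_access=1, electronic_access=3,
                         motivation=2, capability=3,
                         occurrence=occurrence_measurement(12, 1000)))
```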


VULNERABILITY METRICS

The chapter on vulnerabilities presents the state of the art in vulnerability and exposure libraries. Vulnerabilities are those specific technical weaknesses that can be exploited to impact an asset. Vulnerabilities exist in system and network hardware, system and network operating systems, system and network applications, network protocols, connectivity, current safeguards, and even the physical environment. To use this information in the risk assessment process, specific quantifiable analyses need to be performed. There are measurable aspects of these vulnerabilities, and they are necessary to identify and rank the vulnerabilities. The elements of the vulnerability measurement are exposure and the vulnerability subcomponent.

The exposure metric provides a way to determine whether a vulnerability can be exploited via physical or electronic exposure. Physical exposure can be defined as a binary value that indicates whether the vulnerability can be exploited via physical access to the system containing the vulnerability. Electronic exposure can also be a binary value that indicates whether the vulnerability can be exploited via electronic access to the system containing the vulnerability.

The next measurable vulnerability metric is called the vulnerability subcomponent. It consists of several elements and measures the severity of the vulnerability through the following factors:

  • Potential damage—a measurement of the potential damage caused by exploitation of this vulnerability.
  • Relative age—a measurement of when the vulnerability was discovered.
  • Information available—a measurement of the amount of information available for the vulnerability.
  • Area of impact to operations—these are binary values to determine the operational concerns that are impacted by the vulnerability.

Once these elements are captured and quantified, the mathematical function for the vulnerability component of the risk assessment process is:

Vulnerability measurement = f(exposure, vulnerability subcomponent)
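
As a hedged sketch rather than the defined method, this measurement might be computed along the following lines, with binary exposure flags and an additive severity subcomponent; the factor names and scales are assumptions:

```python
# Illustrative sketch: binary exposure flags multiplied by an additive severity
# subcomponent; the factor names and scales are assumptions, not the defined method.
def vulnerability_measurement(physical_exposure: int, electronic_exposure: int,
                              potential_damage: float, relative_age: float,
                              info_available: float, impact_areas: dict) -> float:
    exposure = physical_exposure + electronic_exposure
    # impact_areas holds binary flags for the operational concerns affected
    subcomponent = (potential_damage + relative_age + info_available
                    + sum(impact_areas.values()))
    return exposure * subcomponent

print(vulnerability_measurement(
    physical_exposure=0, electronic_exposure=1,
    potential_damage=3, relative_age=2, info_available=3,
    impact_areas={"confidentiality": 1, "integrity": 1, "availability": 0}))
```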


ASSET METRICS

Most of the concepts of asset metrics were covered in Chapter 3. However, in the risk assessment process outlined here, certain aspects of the information valuation need to be accurately captured to perform the mathematical and analytical functions necessary for a risk assessment. These asset measurement functions for this process are part of a comprehensive analysis that includes the elements of:

  • Sensitivity
  • Criticality
  • Perishability
  • Recoverability
  • Quantity
  • Quality
  • Economic value

These are the elements of asset valuation that were developed for this risk assessment process and are listed here as a guide.

Sensitivity is a relative measurement of the degree of damage the organization can expect if the existence or contents of the information were disclosed. Various values (on a numerical scale or a relative scale such as high, medium, and low) can be assigned to reflect this information asset measurement.

Criticality is also a relative measurement and indicates how vital the information resource is to the performance (or mission) of the organization. Lower values are assigned if loss or degradation of the information does little to impact the ability of the organization to accomplish its mission. If the information asset is absolutely critical to mission accomplishment, the highest value is assigned.

The concept of perishability is central to determining the time value of the information. There have been entire books dedicated to presenting the concept of the perishable nature of many information resources. Suffice it to say, the imputed value of information can change as it ages, so this element of valuation must be taken into account.

In the risk assessment process, it may be necessary to have an additional measurement for recoverability. In the McCumber Cube methodology, this element is primarily accounted for by the availability attribute; however, this risk assessment methodology may refine the attribute by also assigning a value to the relative ease of recovering the asset in the event it is destroyed, damaged, or distorted.

Quantity and quality are two values that may need to be computed as well. For certain information assets, the more you possess, the greater the value. An example could be a potential client mailing list. For obvious reasons, a company would generally pay more for a larger list of names than for a much smaller list, unless specific targeted clients are called for. In the case where the information asset value is determined based on the rarity of the information, it may make sense to assign higher values for more specific or targeted information. Quality is a subjective attribute of information assets that is based on the level or degree of excellence.

Finally, a more specific economic value can be assigned based on the procurement or replacement cost of the information resources. If your organization has specific data on how much it costs to obtain, maintain, and replace information assets, you can use these empirical measurements at this point.

Once these elements are captured and quantified, the mathematical function for the asset component of the risk assessment process is:

Asset measurement = f(sensitivity + criticality + perishability + recoverability + quantity + quality + economic value)
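
A minimal sketch of this asset valuation, assuming each of the seven elements is rated on a simple numeric scale, might look like the following; the element ratings shown are invented for the example:

```python
# Illustrative sketch: the asset measurement as a function of the sum of the
# seven valuation elements, each rated here on an assumed 1-3 scale.
ASSET_ELEMENTS = ("sensitivity", "criticality", "perishability",
                  "recoverability", "quantity", "quality", "economic_value")

def asset_measurement(ratings: dict) -> float:
    return sum(ratings[element] for element in ASSET_ELEMENTS)

customer_list = {"sensitivity": 2, "criticality": 3, "perishability": 1,
                 "recoverability": 2, "quantity": 3, "quality": 2,
                 "economic_value": 3}
print(asset_measurement(customer_list))   # 16
```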


BASELINE RISK FACTORS

Once you have captured and calculated these elements, you can then create a quantifiable baseline risk measurement. Remember, a baseline risk measurement is the sum total of the anticipated adverse impact that can result if a threat exploits a vulnerability of the assets under review. This risk measurement is computed thus:

Risk measurement = Threat measurement × Vulnerability measurement × Asset measurement

Obviously, this simple formula is repeated for each triple created by combining the threats, technical vulnerabilities, and various assets. This can make for a large chain of computations, but it does provide the structured, empirical analysis necessary to determine what is meant by the concepts of risk and security in IT systems. Only when the elements of risk assessment are quantified can we create the tools necessary to help the security practitioner or analyst determine how much security is enough.
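
Because the formula must be evaluated for every threat-vulnerability-asset triple, the bookkeeping lends itself to a short program. The following sketch, using hypothetical precomputed measurement values, enumerates the triples and rank orders the resulting baseline risks:

```python
# Illustrative sketch: baseline risk computed for every threat-vulnerability-asset
# triple and rank ordered; the measurement values below are hypothetical.
from itertools import product

threats = {"outside attacker": 0.24, "trusted insider": 0.60}
vulns = {"unpatched web server": 8, "weak passwords": 6}
assets = {"customer list": 16, "design documents": 12}

baseline = {
    (t, v, a): tm * vm * am
    for (t, tm), (v, vm), (a, am) in product(threats.items(), vulns.items(),
                                             assets.items())
}

for triple, risk in sorted(baseline.items(), key=lambda kv: kv[1], reverse=True):
    print(triple, round(risk, 2))
```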


SAFEGUARD CALCULATIONS

Safeguard values are quantified by determining the risks that are mitigated by each safeguard proposed. A safeguard can mitigate risk for any of the factors that comprise the elements of the threat measurement, vulnerability measurement, or asset measurement. To accurately account for the entire spectrum of safeguards, technical, procedural, and human factors safeguards should all be considered.

Most technologists are familiar with the technical risk mitigation aspects of safeguards. In other words, they consider only those safeguards that can impact technical vulnerabilities. This is part of the vulnerability-centric security model. However, the McCumber Cube methodology indicates that safeguards can also be procedural and human factors based. In the risk assessment process, it is critical to consider safeguards in each of these categories as you do when employing the McCumber Cube methodology.

To fully analyze the entire complement of safeguards, it is also important to remember that safeguards can impact more than just vulnerability measurement factors. Safeguards can be used to reduce the risk from threats and can also be used to change elements of assets to reduce the risk. For example, by adding logon screens that warn users of their legal liabilities, the security practitioner may cause potential attackers to reconsider an attack, thereby potentially mitigating the risk from specific human threat categories. In the case of asset risk mitigation, some organizations segregate information resources based on their calculated value to provide enhanced security or to limit access paths. In this case, by moving assets, a procedural safeguard is employed to mitigate risk to the asset variables.

To capture the degree to which safeguards mitigate risk, this risk assessment process uses the term counter values. Counter values (depicted as CV in the equations) reduce the impact of risk on the affected factors for each of the threat, vulnerability, and asset measurements. They are the aspects of the safeguard as applied to the appropriate risk elements. They are included mathematically in each of the baseline risk equations in a manner that allows us to determine the residual risk. Remember, residual risk is the risk that remains after the application of the selected safeguards; it is calculated using counter values for each of the measurements.

Threat measurement with safeguards included is now defined as:

((Physical access − CV) + (Electronic access − CV)) × ((Capability − CV) + (Motivation − CV)) × (Occurrence measurement − CV)

Vulnerability measurement with safeguards included is now defined as:

(Physical exposure + Electronic exposure) × ((Potential damage − CV) + Age + (Information available − CV) + Confidentiality + Integrity + Availability + Reliability + Usability)

Asset measurement with safeguards included is now defined as:

((Sensitivity − CV) + (Criticality − CV) + (Perishability − CV) + (Recoverability − CV) + (Quantity − CV) + (Quality − CV) + (Economic value − CV))
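
As a rough illustration of how counter values enter these equations, the following sketch applies hypothetical CVs to the threat factors and recomputes that measurement; the factor names, the CV figures, and the clamping at zero are assumptions, not part of the defined process:

```python
# Illustrative sketch: subtract each safeguard's counter value (CV) from the
# factor it mitigates, clamp at zero, and recompute the threat measurement.
# Factor names, CV figures, and the clamping are assumptions for illustration.
def apply_cv(factors: dict, counter_values: dict) -> dict:
    return {name: max(value - counter_values.get(name, 0), 0)
            for name, value in factors.items()}

threat_factors = {"physical_access": 1, "electronic_access": 3,
                  "motivation": 2, "capability": 3, "occurrence": 0.012}
cvs = {"electronic_access": 1, "motivation": 1}   # e.g., from logon warning banners

m = apply_cv(threat_factors, cvs)
residual_threat = ((m["physical_access"] + m["electronic_access"])
                   * (m["capability"] + m["motivation"])
                   * m["occurrence"])
print(residual_threat)   # compare against the baseline threat measurement
```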

The calculations are now run with the counter values (safeguards) factored into the equations to help determine the residual risk. If the baseline risk has been calculated with relative accuracy and the counter values have been developed consistently, you now have both baseline and residual risk calculations with which to perform a variety of analyses that support decision making for your risk management (security) program. Before we cover these analyses, it is important to look at how the raw data needed in the previous calculations should be gathered and applied.


OBTAINING RISK ASSESSMENT DATA

This risk assessment process may at first appear daunting or even merely a hypothetical exercise if there is no way to determine and rate the various factors and counter values. The data necessary to perform this type of analysis is not currently in popular use either in technical or policy circles. However, this data is not dissimilar to actuarial data used in the insurance industry.

Some of the factors presented above would need to be created by the analyst using information generated by the organization. This would primarily be an assessment based on the information systems and on the relative value of information assets they transmit, store, and process. These values could use broad categories like low, medium, and high that could be simply quantified as 1, 2, and 3. They could be defined more granularly on a much larger scale and could include weighted values. In any case, this data could be compiled as an integral part of the McCumber Cube methodology or used as an adjunct for the operational risk assessment process alone.
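
A minimal sketch of such a quantification scheme, assuming a simple 1-to-3 scale and an optional weighting factor, might look like this:

```python
# Illustrative sketch only: maps broad qualitative ratings to simple numbers,
# with an optional weighting factor; the scale and weights are assumptions.
SCALE = {"low": 1, "medium": 2, "high": 3}

def rate(label: str, weight: float = 1.0) -> float:
    return SCALE[label.lower()] * weight

print(rate("high"))                # 3.0 on the simple 1-2-3 scale
print(rate("medium", weight=1.5))  # 3.0 with a weighted value
```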

To obtain more objective, and hence more accurate, data for such aspects as threat factors and vulnerability components, the analyst can employ statistical data collected by the organization, outside agencies, trade or industry groups, or even insurance companies. Another source for this data would be the expert opinion of professionals in IT operations, security, or technology management for use in developing weighting factors. All of these sources could be used for input to make valuation judgments regarding the various risk measurement variables of threats, vulnerabilities, and assets.

Some security professionals have sneered at the need to collect and employ such data. These are often the same people who appear to prefer the seat-of-the-pants approach to security. Unfortunately, the simpler days when a security professional had merely to deploy a considered collection of point solutions to create a security program are fast coming to an end. The information systems technology environment for most organizations is far too complex, and the cost to design, acquire, deploy, and maintain information security systems is too significant, to use an unstructured, qualitative approach. To determine the elements of a complex yet comprehensive security program, a structured analysis like the McCumber Cube methodology is called for. To answer the questions about how much and what type of security is required in an operational environment, an empirical risk management process is required. We can now examine some analyses that can be performed with the data generated by the risk assessment process.


RISK ASSESSMENT DECISION SUPPORT TOOLS

Granted, the effort required to perform a quantitative risk assessment can be onerous. However, once you have cranked through the data gathering and calculations required in the risk assessment process, you have all the raw data required to perform a variety of extremely valuable decision support analyses. The results of these analyses can give you the hard facts needed to define, develop, justify, and implement an operational information security program for any type of application or organization. Although there are numerous techniques and analyses to perform, I will cover four of the more important ones.

The first possible use of the risk assessment data is for weak link analysis. Weak link analysis in the case of information security is based on the assumption that the greatest baseline risk measurement represents the greatest information security risk to the organization. As you calculate the baseline risks, those with the highest risk measurement are those that require the application of targeted safeguards (technical, procedural, and human factors). As successive iterations of the analysis are performed, you can apply safeguards to the highest risk measurement and repeat until a constraint (such as overall cost) is reached.
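
A rough sketch of this iterative weak link approach follows; the fixed safeguard cost, the mitigation fraction, and the budget constraint are hypothetical values chosen only for illustration:

```python
# Illustrative sketch of weak link analysis: repeatedly apply a safeguard to the
# triple with the highest remaining risk until the budget is exhausted. The
# safeguard cost, mitigation fraction, and budget are hypothetical.
def weak_link(baseline: dict, safeguard_cost: float,
              mitigation: float, budget: float):
    residual = dict(baseline)
    spent, applied = 0.0, []
    while spent + safeguard_cost <= budget:
        weakest = max(residual, key=residual.get)   # highest remaining risk
        residual[weakest] *= (1 - mitigation)       # targeted safeguard applied
        applied.append(weakest)
        spent += safeguard_cost
    return residual, applied, spent

residual, applied, spent = weak_link(
    {"triple A": 120.0, "triple B": 95.0, "triple C": 40.0},
    safeguard_cost=10_000, mitigation=0.6, budget=25_000)
print(applied, spent, residual)
```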

The second analysis is a cost-benefit analysis using the risk assessment data. In this decision support analysis, you can use the risk assessment process to compare residual risks using comparable security safeguard solution sets. The set of most cost-effective safeguards can then be selected based on its ability to meet the quantitative risk mitigation criteria established for the information resources under review. These are the cost-benefit ratios that allow you to select the suite of safeguards that provide the best protection for the least cost.
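
As an illustration, the following sketch compares hypothetical safeguard sets by the amount of baseline risk removed per dollar spent; the names and figures are invented for the example:

```python
# Illustrative sketch: rank candidate safeguard sets by baseline risk removed
# per unit of cost; the names and figures are invented for the example.
def cost_benefit(baseline_risk: float, residual_risk: float, cost: float) -> float:
    return (baseline_risk - residual_risk) / cost

options = {
    "firewall + user training": cost_benefit(200.0, 80.0, 50_000),
    "encryption suite":         cost_benefit(200.0, 110.0, 20_000),
}
best = max(options, key=options.get)
print(best, options[best])
```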

Two other decision support methodologies that could be employed with this data are linear programming and goal-oriented programming. Linear programming allows for the optimization of multiple security variables using different safeguard solution sets. Goal-oriented programming allows the analyst to set goals or constraints and lets the system select a safeguard set that meets the security requirements within those constraints.
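
As a rough stand-in for a true linear or goal programming solver, the following sketch uses a brute-force search to find the cheapest hypothetical safeguard set whose residual risk meets a stated goal; the safeguard names, costs, and mitigation fractions are assumptions:

```python
# Illustrative sketch: a brute-force search stands in for a true linear or goal
# programming solver, returning the cheapest safeguard set whose residual risk
# meets the goal; safeguard names, costs, and mitigation fractions are assumptions.
from itertools import combinations

safeguards = {                 # hypothetical: (cost, fraction of risk mitigated)
    "intrusion detection": (30_000, 0.30),
    "patch management":    (10_000, 0.25),
    "encryption":          (20_000, 0.20),
}

def select(baseline_risk: float, risk_goal: float):
    best = None
    for r in range(1, len(safeguards) + 1):
        for subset in combinations(safeguards, r):
            cost = sum(safeguards[s][0] for s in subset)
            residual = baseline_risk
            for s in subset:
                residual *= (1 - safeguards[s][1])
            if residual <= risk_goal and (best is None or cost < best[1]):
                best = (subset, cost, residual)
    return best

print(select(baseline_risk=200.0, risk_goal=120.0))
```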

Whatever decision support tools you choose to employ, the risk assessment metrics described here can provide you with the raw data necessary to make critical decisions about your information security program. They provide a structured, empirical approach that works hand in hand with the McCumber Cube methodology to define, justify, acquire, deploy, and maintain a cost-effective security program in any systems environment.


REFERENCE

1. Trident Data Systems, Risk Management Theory and Practice: An Operational and Engineering Support Process [report], March 30, 1995.
