Prioritizing Risk Elements That Require Risk Mitigation

One way to identify the most important countermeasures is to prioritize the risk elements. A risk occurs when a threat exploits a vulnerability. The importance of a risk can be determined by estimating its likelihood and its impact. The likelihood of a risk reflects how likely it is that a threat will exploit a vulnerability, and the impact identifies the resulting damage to the organization. Risks that are highly likely to occur and will have a high impact are the most important.

Using a Threat Likelihood/Impact Matrix

Threats can negatively affect confidentiality, integrity, or availability. The severity of a threat is evaluated by identifying the likelihood that a threat will affect one of these elements, and the impact is evaluated by determining the extent to which it will be affected.

NOTE

Phishing is a form of social engineering cybercrime in which the phisher attacks targets by posing as a trusted source, often luring them into providing sensitive data, such as their bank details. Phishing can also be used to deliver malware. Individuals who are not trained to recognize threats of this nature often suffer losses, especially when attackers are able to access their personally identifiable information.

TABLE 11-2 shows a sample threat likelihood/impact matrix. This matrix can be used to determine the priority of various threats. Threats with a 0 to 10 percent likelihood of occurring would be assigned a value of 10 percent, threats with a likelihood between 11 and 50 percent would have a value of 50 percent, and threats with a likelihood between 51 and 100 percent would have a value of 100 percent. Similarly, impact values of 10, 50, or 100 would be assigned, depending on the impact to the organization.

TABLE 11-2 A Threat Likelihood/Impact Matrix
                          Low impact (10)   Medium impact (50)   High impact (100)
Low likelihood (10%)             1                  5                   10
Medium likelihood (50%)          5                 25                   50
High likelihood (100%)          10                 50                  100

Prioritizing Countermeasures

A threat likelihood/impact matrix can be used to prioritize risks and countermeasures. Risks with higher scores would result in a higher loss and should be addressed before risks with lower scores.

Risks are evaluated based on the countermeasures currently in place. For example, if an organization were not using antivirus software, the likelihood would be high that systems could become infected. If several systems became infected, the impact would also be high. A high likelihood of 100 percent times a high impact of 100 equals a score of 100.

In another example, a company has antivirus software installed on all its systems, and, in the past year, only one malware incident caused problems, after a single user disabled the antivirus software. The malware tried to spread but was quickly detected by the antivirus software on other systems. In this example, both the likelihood and the impact are low, giving the threat a score of 1.
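The bucketing and multiplication described above can be sketched in a few lines of Python. The function name and the bucket thresholds below are illustrative restatements of the text, not part of any standard:

```python
def threat_score(likelihood_pct: float, impact: float) -> float:
    """Score a threat as likelihood (a fraction) times impact.

    Likelihood is bucketed to 10, 50, or 100 percent and impact to
    10, 50, or 100, matching the matrix described in the text.
    """
    def bucket(value: float) -> int:
        if value <= 10:
            return 10
        if value <= 50:
            return 50
        return 100

    return bucket(likelihood_pct) / 100 * bucket(impact)


# No antivirus: high likelihood times high impact.
print(threat_score(100, 100))  # 100.0
# Antivirus in place: low likelihood times low impact.
print(threat_score(10, 10))    # 1.0
```

Any raw estimate falls into one of the three buckets first; for example, a 40 percent likelihood is treated as 50 percent, so a high-impact threat with that likelihood scores 50.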

FYI

The numerical values assigned to the descriptive labels can be changed if desired. For example, a low impact could be assigned a value of 0 instead of 10, and a high likelihood could be assigned a value of 90 percent instead of 100 percent. Additionally, there can be more than three data points, and different names can be used, such as low, moderately low, moderate, moderately severe, and severe. TABLE 11-3 shows an example of how the threat likelihood/impact matrix can be used to prioritize threats. Each threat is assigned a likelihood and an impact based on current countermeasures.

TABLE 11-3 Threat Scores Used to Prioritize Threats
A table listing the details of threat scores used to prioritize threats.

The information from Table 11-3 shows that the greatest current threats are the two with a score of 50:

  • Attacks on DMZ servers
  • Loss of data on a key database server

These two threats would probably be given high priority, and their recommended countermeasures would be addressed first.
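Prioritizing by score amounts to sorting the threat list in descending order. The sketch below uses only the three example threats and scores mentioned in the text; a real assessment would include the full contents of Table 11-3:

```python
# Hypothetical subset of Table 11-3: (threat, score) pairs from the text.
threats = [
    ("Attacks on DMZ servers", 50),
    ("Loss of data on a key database server", 50),
    ("Loss of data due to a fire", 10),
]

# Highest-scoring threats first: these get countermeasures soonest.
for name, score in sorted(threats, key=lambda t: t[1], reverse=True):
    print(f"{score:>3}  {name}")
```

As the later discussion notes, this ordering is a starting point; management may still choose to address a low-scoring, high-impact threat sooner.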

The attacks on DMZ servers are a threat because these servers are updated only once every six months. These updates are intended to fix bugs and vulnerabilities that have been discovered since the software was released. If the bugs aren’t fixed, the servers are vulnerable. Many attackers look for servers that do not have recent patches installed, giving this risk a high likelihood.

In this case, the solution is simple: implement a countermeasure to ensure that the servers are kept up to date. Several ways are available to do this, but if a risk assessment recommended a specific countermeasure and that countermeasure was approved, it should be used.

Similarly, Table 11-3 indicates holes in the backup procedures: the backups aren't reliable. They may be unreliable because there is no backup plan, because there are no backup procedures, or because test restores are never performed to verify the backups. A common countermeasure to establish the reliability of backups is to develop a backup plan and backup procedures. The plan could include a requirement to perform test restores on a weekly basis.

Test Restores as Part of a Backup Plan

A common management saying is that what can’t be measured can’t be managed, and this includes the effectiveness of backups. However, it is relatively easy to measure the effectiveness of backups by regularly performing test restores and tracking the success rate.

Test restores are frequently mandated as part of a backup plan. A test restore simply retrieves a backup tape and attempts to restore data from it. If the data can be restored, the test was successful. If the data cannot be restored, the test was not successful. An unsuccessful test should be investigated.

A test may be unsuccessful due to many reasons:

  • The tape could be old or corrupt—The length of time tapes are kept in rotation should be reevaluated, or higher-quality tapes should be purchased.
  • The tape drive could be faulty—The problem needs to be fixed as soon as possible. If the drive is faulty, all the backups are suspect.
  • The backup procedures could be faulty—The procedures should be reviewed and corrected. If the procedures are incorrect, all the backups could have problems.

Whatever the problem, the good news is that a test restore discovers it before a crisis. If actual data were lost and couldn’t be restored, the problem would be much more serious.

Companies that measure backups often strive for a success rate of over 95 percent. Companies that don’t measure the effectiveness of their backups could have a success rate anywhere from 0 percent to 100 percent. They just don’t know until data is lost whether the data can be restored.
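Tracking the success rate described above requires nothing more than recording each test restore's outcome and comparing the percentage against a target. The log below is invented for illustration, and the 95 percent threshold is the target mentioned in the text:

```python
# Hypothetical log of weekly test restores: True = data restored successfully.
restores = [True, True, True, False, True, True, True, True, True, True]

# Success rate as a percentage of all test restores performed.
success_rate = 100 * sum(restores) / len(restores)
print(f"Test-restore success rate: {success_rate:.0f}%")  # 90%

if success_rate < 95:
    print("Below the 95% target; investigate the failed restores.")
```

A single failed restore in ten drops this log to 90 percent, which would trigger an investigation into the tape, the drive, or the procedures, as outlined above.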

The threat scores aren’t necessarily perfect. They require some human judgment to ensure that the organization’s needs are met. For example, the threat of “loss of data due to a fire” has a score of only 10. Just because this score is lower than the two scores of 50 doesn’t mean it can’t be addressed earlier.

Management may decide that, even though the score is low, the impact is sufficiently high that it needs to be addressed as soon as possible. The countermeasure for this threat is simple. Store a copy of backup tapes off-site.
