Risk Assessment Challenges

When completing a risk assessment, several challenges must be addressed and overcome. Many of these challenges depend on the type of assessment chosen; quantitative and qualitative assessments each have their own, which were listed in the previous section as limitations.

Several additional challenges exist. These include:

  • Using a static process to evaluate a moving target
  • Availability of resources and data
  • Data consistency
  • Estimating impact effects
  • Providing results that support resource allocation and risk acceptance

These challenges are explored in the following sections.

Using a Static Process to Evaluate a Moving Target

As mentioned previously, a risk assessment is a point-in-time assessment. It evaluates the system against known risks at a specific time and considers the risks based on current controls. In other words, the risk assessment is a static process. However, security is not static. Risks can and do change because attackers and attacks are constantly changing.

As attackers succeed with a particular attack, security experts implement controls against it. At some point, the attack becomes less successful. Attackers then learn new methods of attack. Security experts modify the controls or implement new controls, and the battle continues daily.

Some threats and vulnerabilities look as if they’ve been mitigated successfully and no longer present a risk. Then, suddenly, they reappear as a threat. Domain Name System (DNS) cache poisoning is a good example. DNS cache poisoning can cause a system to resolve a website name to a bogus Internet protocol (IP) address. Users may try to access Acme.com with a web browser, but, instead, they are redirected to Malware4u.com. DNS cache poisoning was identified years ago as a significant threat. It was successfully mitigated, and the attack fell into disuse. From an IT security perspective, it almost became a historical footnote.

Then, in the summer of 2008, a flaw was discovered and published by Dan Kaminsky. Quick as a flash, DNS cache poisoning was once again an issue. Once the results were published, attackers quickly learned how to exploit the vulnerability. DNS cache poisoning was once again raised as a serious concern. Security controls addressed this flaw, and DNS cache poisoning became rare again.

If a risk assessment was completed in March but a vulnerability announced in June affects the system, the validity of the assessment is undermined. For this reason, being aware of new risks as they become known is necessary.

FYI

One way to stay informed of vulnerabilities is to subscribe to alerts from the United States Computer Emergency Readiness Team (US-CERT). Keeping up with the alerts shows that new vulnerabilities are discovered every week, and some of them represent very serious risks. Subscriptions to US-CERT emails and alerts are available at http://www.us-cert.gov/mailing-lists-and-feeds/.

Availability of Resources and Data

Availability challenges are present in two primary areas. One area relates to the availability of resources, and the other relates to the availability of data. Both areas are important to address early in the risk assessment process. If they are not addressed, they can seriously affect the quality of the assessment.

Concerning resources, the personnel involved in the assessment should be knowledgeable about the system they are assessing. The higher their level of expertise, the higher the quality of assessment that can be expected. If the risk assessment team does not have the knowledge and experience it needs, it may have to resort to guessing.

The risk assessment needs support from upper management. This support will help ensure that management dedicates adequate resources to the team. If a leader of a risk assessment has problems getting support from a specific department, upper management can help. On the other hand, if upper-level support is not available for the project, the leader will likely get less and less support.

Data availability is just as important and will drive the type of assessment that is performed. For example, if a large amount of internal historical data related to actual performance and outages is available, it can be used to perform a quantitative risk assessment. This historical data can be used to identify values for single loss expectancy (SLE) and annual rate of occurrence (ARO). If this data isn’t available, a qualitative risk assessment will probably be done instead.
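
As a minimal sketch of how such historical data feeds a quantitative assessment, consider the following Python example. All of the figures, including the asset value and the outage count, are hypothetical.

    # Quantitative risk values derived from historical data (all figures hypothetical).
    asset_value = 250_000       # estimated value of the web server and its data
    exposure_factor = 0.20      # fraction of the asset's value lost per incident
    outages_last_5_years = 10   # count taken from historical outage records

    # SLE (single loss expectancy) is the expected dollar loss from one incident.
    sle = asset_value * exposure_factor                # $50,000

    # ARO (annual rate of occurrence) is how often the incident happens per year.
    aro = outages_last_5_years / 5                     # 2.0 incidents per year

    # ALE (annual loss expectancy) combines the two: ALE = SLE x ARO.
    ale = sle * aro                                    # $100,000 per year
    print(f"SLE ${sle:,.0f}, ARO {aro:.1f}/year, ALE ${ale:,.0f}/year")

The ALE figure is only as good as the historical data behind it; without that data, these values would be guesses, which is why a qualitative assessment becomes the fallback.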

Without the availability of the right personnel and the right data, the risk assessment becomes much more difficult to complete. If the issues are addressed early, the chances of success will be better.

TIP

Even with access to historical data, an organization may still choose to perform a qualitative risk assessment. One reason for choosing a qualitative instead of a quantitative risk assessment might be time: a qualitative risk assessment can usually be completed more quickly.

Data Consistency

Another challenge with risk assessments is data consistency. Data consistency refers to the accuracy of data. Several issues can affect data consistency. These include:

  • Differences in data format
  • Changes in data collection
  • Changes in the business

Each of these concerns can directly affect the accuracy of the data. However, even data that is less than 100 percent accurate can still be used.

Some risk assessments address the accuracy of data with an uncertainty level, which indicates how valid the data is. If all conditions were ideal, the data would be 100 percent accurate. In this case, the uncertainty level would be 0 percent. In the real world, however, a 0 percent uncertainty level is unlikely.

For example, historical data could indicate that a website generates approximately $2,000 of revenue per hour. Current data could indicate this trend is continuing with slight growth. The certainty that the data is accurate could be 80 percent, or a 20 percent uncertainty level. When using this sales data to calculate the SLE, the uncertainty level could also be provided.
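
A brief sketch shows how this uncertainty level might be reported alongside the calculated SLE. The outage duration below is a hypothetical assumption added to the revenue figure from the example above.

    # Reporting an SLE together with its uncertainty level (figures hypothetical).
    revenue_per_hour = 2_000    # historical average revenue, in dollars
    outage_hours = 4            # assumed duration of a single outage
    uncertainty = 0.20          # 20 percent uncertainty in the revenue data

    sle = revenue_per_hour * outage_hours    # $8,000 per incident
    margin = sle * uncertainty               # +/- $1,600
    print(f"SLE ${sle:,.0f} (uncertainty {uncertainty:.0%}: "
          f"${sle - margin:,.0f} to ${sle + margin:,.0f})")

Presenting the range rather than a single number makes clear how much confidence the underlying data deserves.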

Differences in Data Format

Data format can affect how data is used, manipulated, and interpreted. In general, a database is more efficient for querying and manipulating large amounts of data. However, data could have originally been created in a word processing document or a spreadsheet. If data was migrated from one format to another, its accuracy may need to be weighed differently.

For example, suppose data was previously stored in a Microsoft Excel worksheet but is now stored in a Microsoft SQL Server database. When it was stored in the worksheet, it met the needs of the user. However, it wasn’t easy to view the data from different perspectives or query it to show different totals and subtotals. In the database, queries make it easy to view the data from multiple perspectives. The data may be very similar, but the database now allows deeper insight into it.

With this in mind, the users who worked with the Excel worksheets may have drawn accurate conclusions. However, the conclusions may not be as substantial as conclusions drawn from data stored in a database.
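
The following sketch illustrates that querying advantage using Python’s built-in sqlite3 module standing in for SQL Server; the sales table and its rows are hypothetical.

    # Viewing the same data from different perspectives with queries
    # (hypothetical table; sqlite3 stands in for SQL Server).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?, ?)",
        [("East", "Widget", 120.0), ("East", "Gadget", 80.0),
         ("West", "Widget", 200.0), ("West", "Gadget", 150.0)],
    )

    # Subtotals by region -- one query instead of reshuffling a worksheet.
    for region, total in conn.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region"):
        print(region, total)

    # The same data from another perspective: subtotals by product.
    for product, total in conn.execute(
            "SELECT product, SUM(amount) FROM sales GROUP BY product"):
        print(product, total)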

If the data is from different sources, recognizing that it may have been interpreted differently is important. This might cause inconsistencies when comparing the data. All of this can affect the uncertainty level of the data.

Changes in Data Collection

Changes in data collection can also affect the accuracy of data. The primary change that is likely to be seen is a change from manual data collection to automated data collection.

There are many wonderful things to say about humans, but the truth is that humans aren’t the best at mundane, repetitive tasks. Computers are. When people collect and enter data manually, errors should be expected.

Controls and checks help find these errors. Input validation methods verify that the data is valid. For example, a ZIP Code has five digits. An input validation method detects invalid ZIP Code entries of four or six digits. Additionally, one employee can double-check data entered by another employee.
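
A minimal sketch of the ZIP Code check described above, written in Python; the function name is illustrative.

    # Input validation for the ZIP Code example: accept exactly five digits.
    import re

    def is_valid_zip(value: str) -> bool:
        """Return True only if value consists of exactly five digits."""
        return re.fullmatch(r"\d{5}", value.strip()) is not None

    print(is_valid_zip("12345"))   # True  -- valid five-digit entry
    print(is_valid_zip("1234"))    # False -- four digits rejected
    print(is_valid_zip("123456"))  # False -- six digits rejected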

Manual data entry raises the uncertainty level, and failing to double-check the entered data for accuracy raises it further.

On the other hand, if data is collected using automated methods, the predictability of data is much higher. If data is collected, stored, and manipulated using automated methods, the uncertainty level will be much lower.

Changes in the Business

The amount of business a company does this year will usually be different from last year, often because of growth. For some businesses, however, changes in the amount of business could be due to loss of market share or some other business reason. The fact is, sales are rarely stagnant; indeed, stagnant sales are perceived negatively.

It is important to understand what happened in the past so that the future can be predicted. However, the future is never exactly the same as the past. For example, a website may have averaged $2,000 per hour in revenue last year. However, sales over the Christmas season may have doubled over the previous year, and current predictions are that this year’s sales may also double. Together, these factors may require modifying the average sales figure from $2,000 to $4,000 per hour.
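
As a small worked sketch of that adjustment, where the growth factor is the hypothetical doubling described above:

    # Adjusting a historical revenue average for expected growth (hypothetical).
    historical_avg = 2_000   # last year's average revenue per hour, in dollars
    growth_factor = 2.0      # seasonal sales doubled and are predicted to again

    projected_avg = historical_avg * growth_factor
    print(f"Projected average: ${projected_avg:,.0f} per hour")   # $4,000

Any SLE that depends on hourly revenue would then need to be recalculated from the projected figure, not the historical one.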

Similarly, the company could have lost market share in a certain sales market. The loss may have been because the company allocated less money for research and development or marketing or because of any of a dozen other reasons. So, if sales are decreasing, this fact should be taken into consideration.

Then again, although sales data may show that sales are decreasing, a new manager may be instituting changes to increase sales, and preliminary signs may be showing an increase. All of these factors could lead to changing the uncertainty level of the numbers used.

Estimating Impact Effects

The potential impact of any risk is difficult to estimate. The most important thing to realize is that the impact is just an estimate. If anyone could accurately predict the future, they probably wouldn’t be working in the IT or cybersecurity field.

When estimating the impact effects, several factors come into play, and this is true even when accurate historical data is available. For example, a website could have been attacked, resulting in an outage of several hours. While troubleshooting the outage, the technicians learn quite a bit. Yes, the primary focus is to resolve the current outage. However, the knowledge and experience gained from it are tucked away. The next time the server suffers an outage, the recovery time may be much shorter.

Even this example is dependent on several variables. A company with a high turnover rate of IT professionals doesn’t build up the same experience level as does a company with a low turnover rate. If a system is down for the same reason it was down six months ago but it’s the first time a new technician has seen it, the outage will likely be just as long.

On the other hand, previous attacks may have been successful due to vulnerabilities in the system. If these vulnerabilities were discovered, they were likely corrected. However, even if they were corrected, the corrections may not have been documented.

Without the documentation of the previous corrections, everything may appear to be the same today as it was during the previous attack. Instead, several changes may have been implemented that reduced the likelihood of the attack or the impact of the attack. In this example, the uncertainty level may be dependent on changing management practices in the organization.

NOTE

Change management is a process that ensures that changes are made only after a review process and that the changes are documented. It is an important process covered by the Information Technology Infrastructure Library (ITIL). Many companies use change management processes even if they aren’t following ITIL practices directly.

Providing Results That Support Resource Allocation and Risk Acceptance

The results of a risk assessment need to be useful, which should come as no surprise. However, security professionals can fall into the trap of thinking security must be pursued at all costs, which isn’t true. A proper balance between profitability and survivability must constantly be considered.

Two important points to consider are:

  • Resource allocation
  • Risk acceptance

Resource Allocation

Security teams don’t have an unlimited amount of funds or number of personnel. Instead, security will be allocated a finite percentage of resources, an idea that is important to keep in mind when performing the risk assessment.

Any recommendations need to be realistic. They need to consider the culture of the business and the actual potential for the recommendations to be accepted.
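
One common way to keep recommendations realistic is to show that a control costs less per year than the losses it prevents. The sketch below uses hypothetical ALE figures (annual loss expectancy, as computed earlier) and a hypothetical control cost.

    # Cost-benefit check for a recommended control (all figures hypothetical).
    ale_before = 100_000          # annual loss expectancy without the control
    ale_after = 20_000            # annual loss expectancy with the control
    annual_control_cost = 30_000  # yearly cost to buy and maintain the control

    # The control is justified only if it prevents more loss than it costs.
    net_value = (ale_before - ale_after) - annual_control_cost
    if net_value > 0:
        print(f"Recommend the control: nets ${net_value:,.0f} per year")
    else:
        print(f"Hard to justify: costs ${-net_value:,.0f} more than it saves")

Framing a recommendation this way ties it directly to the resources it asks management to allocate.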

Risk Acceptance

Some organizations are willing to accept more risks than others, which is neither right nor wrong; accepting more risk is just the way a business operates. For some businesses, risk taking is an indication of how innovative they can be. When creating a risk assessment, being aware of the business culture is important.

There are two sides to accepting more risk:

  1. The greater the risk, the greater the rewards.
  2. Greater risks can result in larger losses.

For example, many companies in existence today once sold stock for less than a dollar a share. Anyone who bought $10,000 of that stock would be a millionaire today. However, few actually did. The reason is that, when the stock was at such a low price, no one knew whether the company would survive. Some people took the risk and were greatly rewarded. Others took similar risks on companies that have since gone bankrupt, and their risky investment turned out to be a huge loss.

It is important to remember that senior managers make the big decisions in a company. They are responsible for identifying which risks to mitigate, share or transfer, avoid, or accept. Recommendations made to senior managers should be consistent with the residual risk those managers are expected to accept.

NOTE

Residual risk is any risk that remains after management has decided to implement controls. Senior management is responsible for making these decisions. Additionally, senior management is responsible for any losses that occur as a result of residual risk.

Regardless, there is still a responsibility to present all of the data. If some of the recommendations clearly don’t look as if they will be accepted, they can be included in the report but left out of the list of actual recommendations.
