CHAPTER 4 ________________________________
Managing Risk in the Federal Fiscal Environment

Nancy Fagenson Potok and Aaron B. Corbett

Do you know what your risk is of not being able to accomplish your agency’s mission? If you are like many of today’s government managers, you started thinking about risk in a serious way after September 11, 2001. During the past eight years, you have worked to improve awareness among your employees about IT security. You have increased physical security at your location and, if you are a federal manager, you have been implementing Homeland Security Presidential Directive 12, known familiarly as HSPD-12, which requires identity verification, background checks, and the collection of biometric information for the people who get badges to enter your buildings. If you are a federal financial manager, you also have been busy for the last 15 years or so making sure that you are in compliance with the Chief Financial Officers Act of 1990, the Government Performance and Results Act of 1993, and the Government Management Reform Act of 1994. And if you haven’t already developed a Continuity of Operations Plan, or COOP, for your agency, you are surely about to do it. But in spite of all the time and resources you have devoted to compliance, if you are like most of your colleagues, when you are tossing and turning at 2:00 a.m., chances are high that you are worrying about something that is putting your agency at risk of failure.

Risk Management

We all know that risks cannot be completely eliminated. But they can be managed. And as a manager, it is up to you to make the strategic and tactical decisions on how your organization’s risks will be treated. That is, what level of risk can you accept? What risks can be deferred, reduced, transferred, avoided, or controlled? Undertaking the process of identifying, analyzing, measuring, and controlling risks within your own standards of acceptability is what risk management is all about.

Risk management generally consists of three phases:

  • Identification

  • Measurement

  • Management.

Identification

Identification of risk means understanding what your risks are and their sources, including how various risks interact with each other. This requires an assessment of the external environment and internal processes and procedures. Of course, if you are in a government agency, you probably have help with this from your inspector general, outside auditors, the U.S. Government Accountability Office (GAO) or its state government equivalent, and your legislative oversight bodies. Additionally, you and your management team may have brought in your own consultants to help with this assessment.

Measurement

Measurement of risk can be accomplished using a variety of approaches. But all methods involve collecting and maintaining appropriate data. This can get to be quite an expensive activity, so it is always important to make sure that the costs of the data collection are proportional to the level of risk you are managing. Measuring and categorizing risks allows you to determine whether risks are major or minor, and following that, the level of resources you want to devote to managing the identified risks. Clearly, you want to focus first on the risks that you have deemed a high priority and that you can manage. At the opposite end are risks that are a low priority and beyond your control to manage.
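
To make the measurement step concrete, the short sketch below shows one common way to score and categorize risks: rating each risk’s likelihood and impact and ranking by the product. The scales, thresholds, and example risks are hypothetical illustrations only, not an agency standard or a prescribed method.

    # Minimal sketch of a likelihood-by-impact risk register. The scales,
    # thresholds, and example risks below are hypothetical illustrations,
    # not an agency standard.

    risks = [
        # (description, likelihood 1-5, impact 1-5)
        ("Data center outage during peak workload", 2, 5),
        ("Turnover of key analytical staff", 4, 3),
        ("Minor delay in routine report publication", 3, 1),
    ]

    def categorize(likelihood, impact):
        """Classify a risk by its likelihood-times-impact score."""
        score = likelihood * impact
        if score >= 15:
            return score, "high priority: manage actively"
        if score >= 6:
            return score, "moderate priority: monitor"
        return score, "low priority: accept"

    for description, likelihood, impact in sorted(
            risks, key=lambda r: r[1] * r[2], reverse=True):
        score, category = categorize(likelihood, impact)
        print(f"{score:>2}  {category:<32}  {description}")

A register like this makes it easy to see which risks merit active management and which can simply be monitored or accepted, and it keeps the cost of tracking a risk roughly proportional to its priority.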

Management

Management of risk is not for the faint of heart. That is, an organization’s management must make a commitment and sustained effort to develop a risk management culture. Risk management can be passive or active; regardless, it starts with policy development and assignment of responsibilities throughout the organization.

Generally, top management will develop the policies and determine the acceptable levels of risk for the organization. These policies and procedures will have to conform to any standards established by outside oversight and audit agencies, such as the U.S. Office of Management and Budget (OMB) and GAO. In addition, the program managers in your agency must find ways to accomplish their mission while still conforming to the agency’s risk management policies and procedures. Finally, a staff function is needed for collecting and analyzing data to track risk and report to management. Agency managers frequently struggle to determine the level of authority for this non-line management cadre. Giving staff personnel too much authority can lead to a perceived burden on program managers; insufficient authority will hamper the staff’s ability to collect and interpret accurate data. To the extent that data can be collected automatically and analyzed electronically, some of these tensions may be alleviated, if not eliminated.

Risk Management in the National Security Arena

Even as most governmental agencies have become fairly adept at financial risk management over the past 15 years, other agencies have been working across their traditional organizational boundaries to enhance risk management in the national security arena. The National Infrastructure Advisory Council provides the president with advice on the security of critical infrastructure through the secretary of Homeland Security. A 2005 report from the Council on risk management1 concluded that effective risk management methods share common attributes. Among these are:

  • A mature understanding of failure mechanisms and failure indicators. In studying the Challenger space shuttle disaster, among others, the Council found that the National Aeronautics and Space Administration’s own assessment, based on a more mature risk analysis of the heat shield tiles, showed that 15 percent of the tiles represented 85 percent of the heat shield risk. Artificial time constraints compromised work on the tiles. When we couple actuarial data with such human factors, we realize that a more mature understanding of the failure mechanisms would have allowed greater focus on the highest-risk parts of the heat shield.

  • Effective use of data, including its conversion to actionable information. The Council also looked at the 9/11 Commission Report and noted that a key risk management failure was a failure to integrate data in an efficient and reliable manner, especially across the intelligence community. Rapid synthesis of information to help risk managers identify potential risks and potential means of managing the outcomes is a function of having standardized methods and the capability to deliver actionable reports.

  • Institution of a risk management culture across all levels of the organization, with a single point of accountability for risk management (e.g., a risk management officer). The inclusion of oversight functions creates a necessary bridge between risk assessment and risk management activities. The Council’s second and third overall recommendations include the creation of a departmental chief risk officer and central oversight body to accomplish what private companies do through their boards of directors.

  • Training to lessen technical and procedural human error. The Challenger disaster also revealed that lower wages led to high turnover among key construction personnel and a workforce with limited expertise. A more mature approach includes ongoing training to maintain workforce expertise.

  • A strong business case for investments in risk management. Developing a culture of risk management across an agency and prioritizing it in the budget is a practice the public sector can take from private-sector firms successful in risk management.

The National Infrastructure Advisory Council’s overall recommendation was for the government to continue its focus on risk management. With regard to the effective risk management methods listed above, it made three primary recommendations:

  1. Create and standardize risk management methodologies and mechanisms across the government.

  2. Establish a risk management leadership function within departments, bureaus, or agencies.

  3. Establish a risk management oversight function.

Private-Sector Perspectives

A 2007 survey by The Economist Intelligence Unit2 of 238 private-sector executives found that while traditional aspects of risk management, including financial and market risk, remain fundamental, reputational and human capital risks are becoming increasingly important. Fewer than half of the executives surveyed thought that their organizations were effectively managing physical security, terrorism, reputational, natural hazard, human capital, and climate change risk. Some of the barriers to effective risk management identified by these executives were lack of clarity in lines of responsibility for risk management; lack of resources and time; the difficulty of identifying and assessing emerging risks; and the threat from unknown, unforeseeable risks.

These views from the private sector are equally relevant in the public sector. Of particular note was the strongly held belief that, although support from top management was important, the key determinant of success in managing risk was a strong culture and awareness of risk throughout the organization. But how effective is widespread awareness of risk unless you have the ability to convert data into accurate, actionable information in a timely way? In our plugged-in world, our problems often stem from being awash in too much information, with no time to sort through it and decide how to respond. So we take shortcuts, gravitate toward information that reinforces what we think we already know, and quickly discard information that seems irrelevant. And that is how major mistakes are made.

These lessons are well known to the national intelligence community, which is populated by analysts who are trained to systematically sift through large volumes of information with strategic, operational, or tactical importance to determine the probability of future actions in specific situations. Intelligence analysis helps reduce the ambiguity inherent in uncertain situations. It consists of a combination of techniques designed to overcome natural cognitive biases, which are a function of the analyst’s own personality and the organizational culture.

Common Errors in Risk Management

The national intelligence community has identified several common errors made by analysts that lead to poor analysis and, at worst, major intelligence failures. Key among these errors is an entrenched analytical mindset: the tendency to jump to conclusions prematurely or to be unduly swayed by a group’s thinking, which becomes increasingly likely under time pressure. Some of the most common cognitive failures include:

  • Mirror imaging, or assuming that someone else’s mindset is like your own

  • Idea fixation, or only looking for evidence that supports a preformed hypothesis (this is especially common when in a hurry)

  • Inappropriate analogies, which are made when there is insufficient knowledge about the context in which information exists or activities are occurring

  • Stovepiping, or the functional separation that occurs when various parts of an organization do not share information

  • Rational-actor hypothesis, which ascribes “rational” behavior to the other side, but with the definition of rationality coming from one’s own culture

  • Proportionality bias, or assuming that priorities are the same between different cultures—e.g., that small things are small in every culture

  • Deception, or misleading information deliberately provided to the analyst (who does not realize someone is trying to deceive him or her).

These all-too-common errors can occur in any setting in which risk is being assessed. They are not confined to the national security arena.

The good news is that a number of techniques have been developed to help analysts overcome their cognitive biases. These techniques can be applied in a variety of risk management settings. One of the best-known techniques in intelligence analysis is the structured analysis of competing hypotheses (SACH), first developed in the 1970s by Central Intelligence Agency veteran Richards (Dick) J. Heuer, Jr. The first step in SACH is to identify all potential hypotheses, rather than starting with a likely or preferred hypothesis. Then the analyst lists evidence and arguments for and against each hypothesis. The next step, diagnostics, involves trying to disprove as many theories as possible by creating an evidence matrix using gathered information. The findings are reviewed, and identified gaps are filled during the refinement stage. Inconsistencies are then examined, with the understanding that the more evidence that is inconsistent with a hypothesis, the less likely that hypothesis is to be correct. At this point, the analyst uses his or her judgment to eliminate hypotheses. A sensitivity analysis is then conducted to weigh how the conclusions would change if key assumptions or pieces of evidence proved inaccurate. Finally, those responsible for the analysis present conclusions to the decisionmaker, along with a summary of alternatives that were considered and rejected. By considering multiple hypotheses and applying the evidence across all of them, many cognitive errors can be avoided.
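
One way to visualize the diagnostics step is as a simple evidence matrix in which each piece of evidence is rated against every hypothesis. The sketch below ranks hypotheses by how much evidence is inconsistent with them, following the general logic described above rather than any official implementation; the hypotheses, evidence items, and ratings are purely hypothetical, and real analysis relies on analyst judgment rather than a mechanical count.

    # Minimal sketch of an evidence matrix for the structured analysis of
    # competing hypotheses. The hypotheses, evidence, and ratings below
    # are invented for illustration only.

    # Ratings: "C" = consistent with the hypothesis,
    #          "I" = inconsistent, "N" = neutral or not applicable.
    evidence_matrix = {
        "E1: anomalous network traffic observed": {"H1": "C", "H2": "C", "H3": "I"},
        "E2: no evidence of insider access":      {"H1": "I", "H2": "C", "H3": "C"},
        "E3: activity stopped after patching":    {"H1": "C", "H2": "I", "H3": "N"},
    }

    def rank_hypotheses(matrix):
        """Count inconsistent evidence per hypothesis; fewer is better."""
        inconsistencies = {}
        for ratings in matrix.values():
            for hypothesis, rating in ratings.items():
                inconsistencies.setdefault(hypothesis, 0)
                if rating == "I":
                    inconsistencies[hypothesis] += 1
        # The hypothesis with the least inconsistent evidence ranks highest.
        return sorted(inconsistencies.items(), key=lambda item: item[1])

    for hypothesis, count in rank_hypotheses(evidence_matrix):
        print(f"{hypothesis}: {count} piece(s) of inconsistent evidence")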

Obtaining Accurate Intelligence

The environment in which intelligence analysis is being conducted should be conducive to obtaining accurate results. Heuer recommends that a good management system do the following:

  • “Encourage products that clearly delineate their assumptions and chains of inference and that specify the degree and source of uncertainty involved in the conclusions.

  • Support analyses that periodically re-examine key problems from the ground up in order to avoid the pitfalls of the incremental approach.

  • Emphasize procedures that expose and elaborate alternative points of view.

  • Educate consumers about the limitations as well as the capabilities of intelligence analysis; define a set of realistic expectations as a standard against which to judge analytical performance.”3

Although Heuer is specifically addressing intelligence analysis, these recommendations are important to consider when setting up a risk management structure and process.

It may be helpful to take a brief look at some domestic agencies that have been successfully using intelligence analysis to manage risk for some time. For example, the U.S. Coast Guard, in the conduct of its multi-mission maritime function, uses intelligence analysis in support of its living marine resources activities, including the establishment of specialized fishing zones and fish species analysis. The Coast Guard also uses intelligence analysis to support its environmental response to pollution by vessels, in particular the “fingerprinting” of oil residues left by vessels and pollution tracking in waterways. Other agencies, such as the Nuclear Regulatory Commission, the Department of Energy, the Treasury Department, and GAO, have been known to use intelligence analysis, and it would be worthwhile to conduct case studies of these agencies to gather additional information on these uses.

Conclusion

In this time of intense scrutiny of the government’s involvement in the private sector, citizens are looking for information on what return on investment they will see as a result of the infusion of federal funds into various industry sectors. The Obama administration has promised transparency. Private-sector trends in risk management point to similar scrutiny by boards of directors, stockholders, and regulatory agencies that want to see appropriate returns and strong management of risk. For managers in the public sector, greater use of accurate, easily consumable information on risk would increase understanding of both current risks and how future risks can be mitigated.

Getting the right information at the right time will help you determine what safeguards you need to put in place to prevent situations in which your organization is unable to complete its critical functions successfully. Creating uniform yet flexible risk management structures throughout your organization and using established risk management and intelligence analysis techniques will enable you to address and adapt to a changing environment. The risk of not managing risk is simply too high to ignore.

Discussion Questions

  1. What are the key tasks in each of the three phases of risk management: identification, measurement, and management?

  2. What are some of the common attributes of effective risk management methods?

  3. What are some of the common errors analysts make that lead to poor analysis and faulty conclusions?

  4. How does the technique of SACH help overcome such analytic biases?

  5. How can an organization’s management system increase the likelihood of obtaining accurate results?

Notes

1. National Infrastructure Advisory Council, Risk Management Approaches to Protection: Final Report and Recommendations by the Council (Washington, D.C., October 11, 2005).

2. The Economist Intelligence Unit, “Best Practice in Risk Management: A Function Comes of Age,” The Economist (May 2007). http://www.acelimited.com/NR/rdonlyres/7545D871-396C-43BF-B796-6C3BE7D4870C/0/RISK_MANAGEMENT_290307may07.pdf (accessed September 18, 2009).

3. Center for the Study of Intelligence, Central Intelligence Agency, “Perception: Why Can’t We See What Is There To Be Seen?” Psychology of Intelligence Analysis (1999). http://www.au.af.mil/au/awc/awcgate/psych-intel/art5.html (accessed November 24, 2009).
