CHAPTER 5
Organisational Challenges Relating to Risk Modelling

In Chapter 1, we discussed some general contextual challenges in the implementation of robust decision-making processes, including:

  • The presence of motivational, political, cognitive or structural biases.
  • Challenges in achieving an appropriate balance of rationality and intuition.

This chapter discusses some additional challenges in the implementation of full risk modelling activities, focusing especially on issues relating to organisational structure, processes and culture. These include:

  • Beliefs that sufficient risk assessment is already being done.
  • Perceptions that the approaches were tried previously but not found to be useful.
  • Beliefs that results of the models will not be useful, or that implementing the processes and models will create too much extra work.
  • Issues relating to the integration and alignment with existing processes, incentive systems, culture, decision accountability, organisational structures, level of centralisation and general change management issues.

5.1 “We Are Doing It Already”

Risk assessment and management is already widely used in many organisations, at least to some extent. For example:

  • Many base plans often include some consideration of risks and mitigation measures, especially when projects have some aspects that are well understood from similar previous cases.
  • Cross-functional teams are often brought together to identify risks, develop and assess mitigation actions, and assign responsibilities for further actions, especially for major projects.
  • Many larger companies have a risk management department, or an enterprise-wide risk management (“ERM”) group.

Thus, it can easily be believed that existing procedures are already sufficient, and that further formalisation or development is not necessary. There may indeed be organisations for which this is true, although rare is the organisation that genuinely has all the elements in place. This section covers specific challenges in this respect.

5.1.1 “Our ERM Department Deals with Those Issues”

In principle, staff functions (such as ERM) should not be the “owners” of risk assessments of projects that relate to specific business unit activity or projects:

  • Such staff will not have the required technical knowledge about the business.
  • The business will not be properly incentivised to adapt and optimise its projects.
  • The independence of the oversight function will be lost.

Certainly, the working procedures between business units (or departments) and corporate ERM functions should not, in general, arrive at a point where business units are asking ERM questions such as: “What are the key risks to my project?” or “What are you doing to manage the risk within my project?”, although such cases do nevertheless arise in practice!

Thus, the existence of such staff functions is, in general, not sufficient to ensure that business project risks are adequately addressed.

Such staff functions do, of course, have important and valid roles, generally around:

  • Acting as facilitators of processes, and providing objectivity and challenge.
  • Acting as a centre of expertise for tools, techniques and methodologies, so that new and up-to-date processes and best practices are shared and implemented widely.
  • Providing a mechanism to identify and escalate risks that are common to different business units.
  • Providing independent objective evaluation of projects, or acting in a risk-auditing capacity.

5.1.2 “Everybody Should Just Do Their Job Anyway!”

Although the use of risk assessment principles should generally be integrated into day-to-day activities, and led by the project's business owners, doing so would nevertheless be insufficient: there may be important items outside the scope or capability of an individual, small team or department, and in many areas there are significant challenges relating to organisational culture, processes and biases that may be insurmountable for an individual or small group to overcome. Thus, even if one is “doing one's job”, it would still be necessary to formalise the process in some cases, especially where:

  • The projects are complex and of significant scale (meaning that risks can easily be overlooked and that input from a wide range of experts and functions is generally required).
  • Potential risk-response actions need to be escalated, agreed, authorised or communicated more widely, as their implementation is within neither the scope nor the authority of the staff performing the risk assessment.

5.1.3 “We Have Risk Registers for All Major Projects”

The use of risk registers is often a valuable step in the overall risk management process. However, risk registers are insufficient in some cases, so that full risk models are necessary. The reader is referred to the discussion at the beginning of Chapter 4 in this respect.

5.1.4 “We Run Sensitivities and Scenarios: Why Do More?”

Sensitivity and scenario analyses, and their implementation, are no doubt familiar to most readers. They are indeed a powerful tool in some contexts. These techniques, their relationship to risk and simulation, and their Excel implementation are described in detail in Chapter 6. Here, we simply note some of their key limitations:

  • A model designed around traditional sensitivity or scenario methods (or thought processes) will typically not be able to capture the genuine risk profile (or risk scenarios) to which one may be exposed. Whilst it may be able to address some “What if?” questions, such questions generally cannot be addressed for specific risk factors unless the model has been designed in a way that is aligned with the nature of the underlying risks; this is discussed in detail in Chapter 7.
  • There is no explicit attempt to calculate the likelihood associated with output values or the full range of possible values (which would require multiple inputs being varied simultaneously and probabilities being attached to these variations). Thus, the decision-making basis is potentially inadequate:
    • The average outcome is not known.
    • The likelihood that a base case (or other assumed) plan can be achieved is not known.
    • It is not possible to reflect risk tolerances (nor contingency requirements) adequately in the decision.
    • It is not easy to compare one project with another.
  • They do not show a representative set of cases. For example, a base case that consists of most likely values will, in fact, typically not show the most likely value of the output; indeed the base case may be quite far from any central point of the possible range, as discussed in Chapter 4.
  • They do not distinguish between variables that are uncertain and those to be optimised, and hence may fail to highlight important decision possibilities.
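The first and third points above can be made concrete with a minimal simulation sketch. The figures and ranges below are entirely hypothetical, and Python is used purely for illustration (the book's own implementations use Excel): a base case built from most-likely input values can sit well above the simulated average outcome once all inputs are varied simultaneously over asymmetric ranges, and the likelihood of a loss becomes visible.

```python
import random
import statistics

random.seed(0)

# Hypothetical project: profit = price * volume - cost
# Base case built from most-likely values of each input:
base_price, base_volume, base_cost = 10.0, 100.0, 800.0
base_profit = base_price * base_volume - base_cost  # 200.0

# Simulation: vary all inputs at once, with asymmetric (skewed) ranges
n = 20_000
profits = []
for _ in range(n):
    price = random.triangular(8.0, 11.0, base_price)      # more downside than upside
    volume = random.triangular(70.0, 110.0, base_volume)  # more downside than upside
    cost = random.triangular(750.0, 1200.0, base_cost)    # more upside (cost overrun)
    profits.append(price * volume - cost)

mean_profit = statistics.mean(profits)
p_loss = sum(p < 0 for p in profits) / n
# The mean outcome lies below the base case, and a loss has a real likelihood;
# a one-at-a-time sensitivity run around the base case reveals neither.
```

This also illustrates the earlier bullet: a base case composed of most-likely input values is not the most likely (nor the average) value of the output.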

5.2 “We Already Tried It, and It Showed Unrealistic Results”

Simulation techniques have been applied to business applications for several decades. Unfortunately, some experiences were not as value-added as they perhaps should have been; thus, one sometimes encounters scepticism as to their benefits. In this section, we discuss some of the issues that most frequently arise in this respect. Generally, one reason for risk models to lose credibility with senior decision-makers is when the results presented are plainly wrong, unintuitive or fail to pass the “common sense” test.

5.2.1 “All Cases Were Profitable”

It is (perhaps surprisingly) quite common for the results of risk-based simulation models to show that all possible outcomes are profitable or reasonably favourable. For example, one may have a simulation of the total discounted cash flow in a project (or of its net present value), in which all values in the possible outcome range are positive. Such results are generally unrealistic: it is unlikely in practical business situations that a project would still succeed even in the “worst case” scenario. One would be very fortunate to be involved in such a business! (Of course, the future values of a project may be positive in all cases if historic investment is not included in the calculations; here we are referring to the entire scope of a project.)

Nevertheless, when faced with such a case, one could perform a mental exercise (or group discussion) to identify what could happen in a worst-case scenario (or a set of possible near-worst or otherwise bad scenarios): if one cannot conceive of even a single scenario in which the project fails, then either one's thinking is too narrow (i.e. not all costs or other variables, risks and uncertainties have been taken into account), or it is so intuitively clear that the project is a “no brainer” proposition that no analysis of it should be needed at all!

In fact, almost always (with some genuinely well-intentioned and disciplined thinking), one can readily conceive of outcomes in which a project would fail (when the full set of costs are included), but it is “simply” that the model does not capture the correct behaviour of the situation.

One of the challenges (also mentioned later) is therefore an education and communication process in which higher levels of management should expect to see a range of possible outcomes in which in some cases the project is not particularly successful or even fails.
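A short sketch (with entirely hypothetical figures, and in Python purely for illustration) shows a typical mechanism behind such results: a model that omits an event risk can show every outcome as profitable, whereas the same model with the risk included shows a genuine failure tail.

```python
import random

random.seed(1)
N = 20_000

def project_value(include_event_risk):
    # Base uncertainties: ranges narrow enough that revenue always exceeds cost
    revenue = random.triangular(900.0, 1200.0, 1000.0)
    cost = random.triangular(700.0, 850.0, 800.0)
    value = revenue - cost
    # Hypothetical event risk: a 20% chance of, say, rework or a regulatory delay
    if include_event_risk and random.random() < 0.2:
        value -= random.uniform(200.0, 600.0)
    return value

without_event = [project_value(False) for _ in range(N)]
with_event = [project_value(True) for _ in range(N)]

all_profitable = min(without_event) > 0          # True: "too good to be true"
p_fail = sum(v < 0 for v in with_event) / N      # a genuine failure probability
```

In the first case every simulated outcome is positive (the worst case is, by construction, still profitable); including the excluded event risk restores the failure scenarios that disciplined thinking would in any case have identified.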

5.2.2 “The Range of Outcomes Was Too Narrow”

Another frequent observation about the output of some risk models is that the ranges generated were too narrow compared to one's intuition or historic data. Where this is the case, the credibility of the risk modelling process can be drawn into question.

There is much commonality between drivers of this and the above issue (in which a model shows that all cases are profitable):

  • The values used to populate the input risk ranges may be unrealistically narrow. In particular, the use of preset default ranges of variation (such as ±25%), rather than considering (or using) data to estimate what the true range might be, is a potential source of error. For example, some cost budgets exceed their base figure by a factor of five (+400%), even at the aggregate level; the possible range for individual line items may often be even wider than this. In addition, deriving the parameters of the ranges with reference to a base case figure, rather than by separate estimation, will mean that any biases in the base figures are also reflected in the risk model.
  • Event risks may have been ignored or excluded, such as those that may only occur in 20% of cases; their inclusion would often result in “tail risk” and wider ranges.
  • Dependencies were not reflected correctly in the model; for example, if risks or their impacts in reality generally occur together or take (say) high values together, then if this is not also reflected in the model, the range for the output will be incorrectly captured (it may be too narrow or too wide depending on what is calculated and the nature of the true dependencies). Similarly, if a model is excessively detailed, then it can be challenging to capture all relevant dependencies between the items, resulting in an incorrectly estimated range (this is often a result of overlooking common underlying drivers of risk, as discussed further in Chapter 7).
  • The non-symmetry of processes is not captured, e.g. one uses a ±10% range (as is often typical in sensitivity analysis), whereas the true possible range may be from –10% to +50%.
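The last two points can be illustrated with a small sketch (hypothetical cost line items; Python is used purely for illustration). Introducing a common driver across items, together with an asymmetric item-level range, markedly widens (and shifts upward) the P10–P90 spread of total cost compared with independent, symmetric ±10% variations:

```python
import random
import statistics

random.seed(2)
N = 20_000

def pctile(xs, q):
    s = sorted(xs)
    return s[int(q * (len(s) - 1))]

base_items = [100.0, 150.0, 250.0]   # hypothetical cost line items (base total 500)

symmetric_totals, realistic_totals = [], []
for _ in range(N):
    # (a) independent, symmetric +/-10% variation of each item
    symmetric_totals.append(sum(b * random.uniform(0.9, 1.1) for b in base_items))
    # (b) a shared driver (e.g. inflation) plus asymmetric item variation (-10% to +50%)
    driver = random.uniform(-0.1, 0.3)
    realistic_totals.append(
        sum(b * (1 + driver + random.triangular(-0.1, 0.5, 0.0)) for b in base_items)
    )

width_sym = pctile(symmetric_totals, 0.9) - pctile(symmetric_totals, 0.1)
width_real = pctile(realistic_totals, 0.9) - pctile(realistic_totals, 0.1)
mean_sym = statistics.mean(symmetric_totals)
mean_real = statistics.mean(realistic_totals)
# The correlated, asymmetric case produces a much wider and upward-shifted range.
```

The shared driver is the simplest possible way to represent a common underlying risk factor; Chapter 7 discusses more realistic treatments of dependency.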

5.3 “The Models Will Not Be Useful!”

Although there are some modelling challenges in risk contexts, in many cases, models can be built that are simple yet powerful and value-added. Of course, no model is perfect, and some are better or more useful than others (see Chapter 7 for further discussion). Nevertheless, any model can be criticised or challenged in principle, as discussed below.

5.3.1 “We Should Avoid Complicated Black Boxes!”

It is sometimes claimed that risk modelling using simulation involves creating models that are too complex for most people to understand, and that they become non-transparent “black boxes” that only the model builder (at best) can understand, with the result that the models are not reliable and cannot be reviewed or questioned by others.

It is true that risk models are in some ways more complex than static models:

  • There may be additional areas of knowledge or capability that are necessary to learn, such as statistical and probabilistic concepts, their application and interpretation, as well as (perhaps) modelling techniques, e.g. more advanced use of Excel lookup functions or VBA coding, in order to be able to capture the dynamic logic that is often (ideally) required in risk models. The nature and extent of the required knowledge will, of course, depend on one's role in the process (modelling analyst, technical expert or decision-maker).
  • The input area is usually larger than for the corresponding static model; it will generally contain the parameters for the distributions or ranges, as well as a base case. In addition, it may need to reflect all items and parameters associated with the relevant decisions (risk mitigation and response decisions, and general project decisions).

On the other hand, both the process of identifying risk and of reflecting these in a model should create more transparency, not less: for example, the cross-functional inputs, more formal processes and separation of risk ranges from base cases are steps that all shed light on potential incorrect logic or biases.

5.3.2 “All Models Are Wrong, Especially Risk Models!”

Any form of modelling (whether risk modelling or static modelling) has some inherent ambiguity to it: models are simplifications of reality (and hence always “wrong”, otherwise known as “model error”), and are effectively statements of hypothesis about the nature of reality. Thus, in order to build a model with reasonable accuracy, one has to understand a situation to some degree (key drivers, interactions, etc.); on the other hand, where a situation is understood perfectly, a model would not be necessary. There is an important exploratory component to model building that is often underestimated or overlooked: the process of building a model can generate new insight and intuition, and thus help to achieve a balanced (and aligned) rationality and intuition in a decision process.

Of course, there are cases where the output of models is relied on too much, with insufficient consideration given to factors that cannot generally be included explicitly within models: typically, every model is valid only within an assumed context, which is usually implicit and undocumented. For example, most financial models implicitly assume that liquidity traps do not happen, and that refinancing will always be available. Some major financial failures are arguably linked to such issues (such as the 2008 Financial Crisis, or the failure in 1998 of the hedge fund Long-Term Capital Management).

In principle, there is generally no real difference between building either a useful risk model or a useful traditional static model, and there will be situations in practice in which useful risk models cannot be built. This issue is explored in Chapter 7 in more detail.

5.3.3 “Can You Prove that It Even Works?”

The identification of appropriate risk-response measures is a fairly tangible benefit of a risk assessment process. On the other hand, some of the benefits of such processes are less tangible: it is not easy to “prove” that any particular decision is correct or not (except in extreme cases, for example where no outcome is desirable). Indeed, one may make a rationally based decision, with an optimal risk profile, but still find that the occurrence of residual risks leads to an unfavourable outcome. Thus, a major challenge is to distinguish a good decision from a good outcome (and similarly a poor decision from a poor outcome).

Despite the fact that many organisations have introduced risk analysis into their decision processes and have more confidence in their decisions, finding robust evidence that the decisions are better (or that such organisations even perform better) is not easy:

  • It would be insufficient to compare the performance of organisations that use risk assessment methods with that of organisations that do not. The real test is whether a specific organisation becomes “more effective and successful” than it otherwise would have been: thus, one would have to compare the performance of the same organisation with itself (in the cases where risk assessment was and was not used), which in practice cannot really be done.
  • One can think of an analogy in which one aims to assess whether people receiving health care treatment are in better health than those who are not currently receiving any: at the (macro) level of the population, people who receive treatment are likely to be less healthy than people who are not receiving treatment (there will be some exceptions at the individual level, of course). However, at a micro level, an individual receiving treatment is generally in better health after the treatment than beforehand. Thus, the effectiveness of the treatment can only be observed by micro studies (e.g. clinical trials with many patients in the stages of drug development, and then specific patient studies once a treatment is on the market). However, the analogous studies are much harder to do in real-life business contexts.

5.3.4 “Why Bother to Plan Things that Might Not Even Happen?”

One objection that is sometimes made about risk assessment is that one is “planning to lose”: a risk (especially an event risk) may not even occur, whereas the cost of mitigation (e.g. of reducing their likelihood or their impact) is a definite cost if the measure is implemented:

  • An attitude can exist that it may be better not to spend time and resources discussing, identifying and mitigating items that may not even happen, but rather to simply wait for them to (perhaps) happen and then deal with the consequences: why incur additional cost to deal with an issue that might not even arise?
  • Such an attitude is often reinforced by accountability-related issues: when a risk does materialise, one may argue that “it could not have been foreseen” or “we were just unlucky” or that “someone (else) didn't do their job”, or the “external environment changed unexpectedly”.

Thus, it may be organisationally more credible not to spend money on risk mitigation (or delay a project until more information is available) than it would be to spend money to reduce the probability of something that may not happen anyway. Hence, someone opposed to the implementation of risk assessment or mitigation measures may be able to position themselves as acting in the interests of the organisation (keeping costs low, implementing projects quickly, etc.). If such risks do occur, they may also be able not only to shield themselves from any blame but also to capitalise on the occurrence by, at that point, taking a leading role in dealing with the consequences, and thus being perceived as a person of action.

5.4 Working Effectively with Enhanced Processes and Procedures

In general, the implementation of a formal risk assessment process requires additional work, for example as a minimum:

  • The more precise requirements concerning risk definitions that are needed for quantitative approaches mean that some additional iterations of process stages are typically required.
  • It may lead to the identification of areas where additional information or research is needed, or where supplementary data or external expertise are required, or raise issues that need further internal communication or authorisation.
  • One will generally be required to seek input from a wider set of staff (such as cross-functional teams) to identify risks and assess mitigation actions.

5.4.1 Selecting the Right Projects, Approach and Decision Stage

Clearly, it is important that any effort put into the process has a payback in terms of the value generated: just as it would not make sense to perform an elaborate risk assessment on a simple situation, so it would be unwise not to do so for large, complex projects. In fact, for many projects, the incremental effort and investment would represent a very small proportion of the total project investment, especially if the process is organised efficiently. Thus, the more significant the need for a risk assessment, the more formalised and sophisticated should be the approach chosen.

In practice, any particular organisation would likely need to develop criteria (such as project size limits) in order to decide which approach to risk assessment is most suitable for a particular project. Such criteria may be able to be aligned to some extent with those used within existing processes (such as those that define when a project needs Board-level approval).

Most large organisations have “gating” systems for authorising projects, with projects required to pass through a series of gates (or hurdles) before definitive approval is granted, and in which investments and other resources are committed:

  • The early steps (prior to the passing of a project through the gating system) will typically be a non-formalised process involving a range of discussions and plans concerning a potential project and the approximate benefits and costs of doing so. Different options and strategic alternatives may be discussed and compared at this stage.
  • Once a project appears worth pursuing further, the gating system may be entered (the precise criteria and level of detail required to pass from one gate to the next will depend on the specific organisation and business context):
    • The formal passing of a project into the gating system will be a signal (and perhaps a requirement) that a wider set of staff should become involved.
    • A budget may need to be authorised to cover the costs of more detailed planning processes, and some resources may need to be formally committed to analysing the project in more detail.
    • As the project moves through the gates, increased research, work and general planning will be required, for example in areas such as market research, product development, engineering designs, production and manufacturing planning, relationships with partners, preliminary coordination with third parties, government agencies, infrastructure planning, design and construction planning activities, and so on.

The challenge in this respect is that there are competing forces at work in terms of finding the right balance between using risk assessment too early and too late in the process (and in what form to use it at each stage).

Reasons in favour of using the techniques earlier in the process include:

  • It has an important role to play in optimising the design and structure of projects, and the setting of appropriate objectives, targets and contingencies for them.
  • To do so conforms to the established management practice of aiming to “work smarter” in the earlier (upstream) stages of multi-stage projects and decision processes.
  • To take an extreme case, clearly for a large and complex project it would not make sense to consider risks only after a project has been authorised (or even at the last stage prior to authorisation), which may result in having to reject projects after much development work, time and cost have been invested.

One reason to use the techniques only at later stages is that a full and detailed quantitative model on every project option from the very earliest stages of the consideration of these options would not be practical: this creates extra work, and could disrupt and damage some of the more creative and flexible processes that are required at the earlier stages, such as ensuring that a complete and varied set of decision options is being considered.

5.4.2 Managing Participant Expectations

The implementation of risk assessment will typically involve the participation of multiple stakeholders and experts, including cross-functional resources. It is generally important that expectations are set early on about the process and time commitments that are likely to be required. If not, the situation can arise where process participants may feel that their contribution to the cross-functional risk assessment is complete once risks have been identified, and mitigation plans put in place. In fact, their input is likely to be needed at later stages, especially in quantitative approaches; thus, participants may become frustrated (or are unwilling to cooperate to the fullest extent of their capabilities), as they are asked to provide input on aspects that they feel have already been completed, and an impression of a lack of direction or organisation can result. Specifically, it is important to be clear that:

  • Risk assessment should be an inherent part of all project activities.
  • The process is, by nature, iterative.
  • As the project develops, additional deliverables are needed:
    • The various stages of an informal and formal gating system will have different requirements.
    • The approaches used, and the precision required concerning risk definitions, will likely change, especially as one moves from qualitative line-item risk management to full risk modelling.
    • Objectives may change, even if they have been clearly defined at the beginning. For example, once some results of a risk assessment have been seen by senior management, it may awaken a desire to demand more, such as creating full risk models. This can mean that the revisiting of process stages becomes necessary, despite best efforts to have clarified the objectives early on.
  • Participants are very likely to have to provide additional inputs and make additional assumptions as the process progresses and results become available.

5.4.3 Standardisation of Processes and Models

One key challenge is to decide which parts of the process should be standardised and which should be left more open. This is especially relevant to risk quantification and the building of models:

  • In principle, one needs to allow for flexibility and creativity in the analysis, in order to be able to reflect the true risk structure of the situation at hand, which may be one that is complex and will typically change from project to project. The challenge in this respect is to ensure that those participants who are tasked to build the associated quantitative models indeed have the right skills and capabilities to do so.
  • On the other hand, especially when the introduction of risk assessment is in its early stages, it may be more effective to work to standard templates (especially for those participants who are less familiar with the concepts), whilst the creation of common understandings, formats and tools is being generated. The challenge here is to avoid “box-ticking” exercises, or work that is of low value-added.

Certainly, once risk assessment is deeply embedded within an organisation, its processes and culture, the scope for more creative approaches should be readily available; on the other hand, to reach this stage, one must not oversimplify to the point that there is little perceived value-added.

5.5 Management Processes, Culture and Change Management

Perhaps the biggest (but often overlooked) challenge in achieving successful implementation of formalised risk assessment processes (especially ones that use aggregation and full modelling approaches) is the need for a change in management practices at many levels. Some of these are explored in this section.

5.5.1 Integration with Decision Processes

It is probably fair to say that some risk assessment activities already take place within many existing business projects, but where results are not communicated explicitly to higher levels of management:

  • Where an individual or project team uses a risk assessment as a tool to support project design (as discussed in Chapters 1 and 2), then the results may be directly and implicitly reflected in modified base projects. The explicit details of the assessment may never be discussed with higher management in detail.
  • By necessity, upward communication is of a summarised nature, focusing on key points, and if risks are considered unimportant in a particular project, then this may not be an area of focus.
  • In many organisations, there is a strong cultural norm that it is almost always best only to communicate positive messages upwards; as such, risks are not usually near the top of the discussion list (one may nevertheless on occasion be able to frame a discussion of risks in positive terms).

However, in general, the use of risk assessment will not achieve its full benefit unless there is more explicit communication with management, with the results used as a core basis for decision-making:

  • To achieve fair comparability between projects, the cost and benefits of risk-response actions would need to be included in all projects (to avoid favouring projects that ignore or understate some or all of their key risks).
  • Typically, the authorisation and implementation of many key risk-response measures would require higher-level management involvement:
    • Additional budget or cross-functional activities may be required, or modifications to targets such as project completion dates may need to be considered.
    • A final decision on project authorisation needs to reflect the cost and benefit of such measures, and so may need to occur later than the time at which the measures were authorised.
    • In order to reflect risk tolerances in decision-making in a structured way, at least some communication of the likely range of output will be needed.

5.5.2 Ensuring Alignment of Risk Assessment and Modelling Processes

In general, there will be some form of specialisation of activities within a risk assessment project; that is, a variety of technical and project specialists will be involved in one-on-one and group discussions, as well as in other cross-functional activities (as discussed in Chapter 2); there may also be a team member assigned specifically to the modelling activities.

It is clear that building a model without proper knowledge of the risks will likely lead to one that is inappropriate to address the needs of a project team and of decision-makers. It is highly unlikely that a model built without appropriate alignment with the general risk assessment process would contain the right variables, be built at the right level of detail, or have formulae that are flexible enough to capture the effect of risks and their impacts, or to allow additional risks to be readily incorporated.

On the other hand, it can be challenging for a modelling analyst to receive the required inputs from a general risk assessment team:

  • There are many activities within a general risk assessment process that do not require any specific inputs from quantitative risk modelling activities. In particular, many aspects of risk identification and mitigation planning may not require any input from modelling or quantification activities.
  • Many established risk management practices do not require specific modelling or aggregation activities (see Chapter 3), so that even experienced risk assessment practitioners may have a lack of exposure to the issues required to be addressed in risk modelling contexts.
  • The modelling analyst will often be perceived as an addition to the team, rather than a core part of it, and is likely also to be junior to many of the process participants in terms of organisational hierarchy and authority.

Thus, it is often the case that a general process team (that is focused on risk mitigation management) may have less perceived need or incentive to alter the nature of its activities to accommodate the needs of modelling activities.

There is, therefore, the possibility that activities that are undertaken within a general risk assessment process are not properly reflected in the modelling activities, due to lack of communication or of joint incentives (or responsibility) for the overall output of the process.

This is not to say that teams conducting general risk assessment activities have no incentive to interact appropriately with modelling activities. However, such incentives are inherently much weaker than the modelling analyst's incentive to obtain appropriate input from the general process. A general risk assessment team does have some incentives in this respect, because a process conducted in isolation from the modelling activities will itself be conducted inefficiently to some extent:

  • There would generally not be a correspondence between the outputs of the general risk assessment process (such as identified risks) and the quantitative model, which can act to reduce the credibility of both processes.
  • A project team that plans activities and mitigation measures, but whose budget and resource requirements are not reflected in the model, may find that such measures are not available for implementation once a project has been authorised (or project expenditure will then exceed the planned amount even without risks materialising, as the base planned figure would be lower).
  • There is likely to be a need for significant process rework if decision-makers wish to see a clear linkage between the output of a model and the general risk assessment process. For example, as discussed in Chapter 3, if the risk assessment process has not been conducted with clear objectives as to the requirements for risk definitions (e.g. to avoid double-counting and overlaps, and to capture dependencies), then many fundamental process steps may need to be revisited in order to identify risks in the appropriate way.

Thus, where a quantitative risk model is required as an output of the general process, the success of each depends on close coordination and alignment between the two. Nevertheless, the (typically more senior) general process participants usually have a much weaker inherent need for such alignment than does the modelling analyst.

Thus, it is incumbent on process leaders and senior management to define the approach that is to be used with respect to risk quantification, and to ensure that all participants in the process are appropriately engaged and have clear objectives as to the role of, and need for, quantified risk modelling.

5.5.3 Implement from the Bottom Up or the Top Down?

Of course, any successful implementation needs to have significant genuine “bottom-up” support. In addition, it is probably fair to say that management's role must be much more proactive than just supporting any bottom-up initiatives; senior management needs to drive the implementation to make it happen. With the multitude of challenges that relate to organisational processes and culture, a strong “top-down” implementation is almost always necessary (if the use and acceptance of risk modelling is to extend widely into the organisation, and not be conducted only by a few specific individuals). Many of the challenges would otherwise be insurmountable for individuals or small groups of like-minded people, and others would create additional difficulties that in their totality may also be too challenging to overcome.

Examples of ways in which implementation can be driven top down include:

  • Ensuring that the objectives of each risk assessment project are made clear. In particular, where quantitative modelling activities are desired, process participants will need to provide additional (or modified) inputs into the process, as discussed earlier.
  • Laying out (compulsory) guidelines as to what outputs of risk models are required at each stage of a gating system or decision process.
  • Including risk-based metrics in required medium- and long-term plans, and in some key performance indicators or incentives.
  • “Walking the talk”: providing budgets, human resources and training to enable participants to implement the extra work and new techniques that are generally required, and allowing time for such analysis within planning timeframes.
  • Rewarding proactive bottom-up initiatives (and being seen to do so). Staff who proactively drive these processes can be given rewards in one form or another; this can range from informal “prize-giving” to formalised recognition (with incentive, career development and promotion systems), as well as the granting of leeway to allow for some mistake-driven learning.

5.5.4 Encouraging Issues to Be Escalated: Don't Shoot the Messenger!

Where cultural norms mean that lower-level staff generally prefer to present an optimistic story to higher-level management, it is easy for issues to remain hidden until they are critical (or until it is too late), and for assumptions to be too optimistic.

Indeed, attitudes from management such as “bring me solutions, not problems” are common:

  • Such attitudes may be justified, for example where management feels that the lower-level staff have not given enough genuine thought to the situation, and wish them to redouble their efforts.
  • In other cases, where a project team has done everything within its power to develop risk-response measures and needs more senior-level authorisation for them to be implemented, such attitudes are not generally justified.

Whilst it may be advisable for lower-level staff (as far as possible) to frame their communication in positive terms (e.g. “uncertainty” may sound less negative than “risk”, and “opportunities to improve project performance” are the flip side of risk-mitigation or risk-response measures), all layers of management have a responsibility to achieve appropriate levels of delegation of project responsibility, whilst remaining open to risks being escalated. For example, management may communicate that they expect to see a range of possible outcomes, including some in which the project is not as successful as desired or fails, together with the causal factors behind these. Additionally, management must be willing to “roll up their sleeves” to support the appropriate risk responses, and not “shoot the messenger”.

5.5.5 Sharing Accountability for Poor Decisions

Most organisations (apart from perhaps the smallest) have decision-makers who are separate from other key process participants. Specifically, there is very often a separation between senior management and those who provide the information on which management makes decisions, as well as from shareholders and other important stakeholders.

This separation can reduce the desire to base decisions on objective methods, such as robust quantitative risk assessment, encouraging decision-makers instead to rely on their own knowledge, judgement, intuition and biases. In particular, one can surmise that as long as the costs of poor outcomes are borne disproportionately by others, and any particular decision-making basis is widely accepted as appropriate (so that its use is not regarded as a weakness), then the accuracy or validity of that basis is largely irrelevant. The incentives for decision-makers to make the genuinely right decision can be outweighed by other factors. Cases such as the widespread acceptance of “static” forecasts (as well as the Financial Crisis of the early 21st century) are perhaps driven by such issues.

In particular, the use of a static forecasting methodology (which provides a forecast that is essentially always wrong, despite being widely accepted) may be preferred by decision-makers, and may also create a justification for the use of biases, intuition and personal preferences in the decision process: if the project in question is implemented, one of two cases will arise:

  • The project goes well (in the sense that the aggregate risk occurrence is not particularly unfavourable); everyone is content, and the favourable outcome will be attributed to good decision-making, skill and competent project delivery.
  • The project goes badly. In this case, the outcome can be blamed on a poor forecast (amongst other factors), as a static forecast would not have generally reflected such a case.

On the other hand, if a full and robust risk assessment had been conducted prior to the project, then this risk assessment would have shown a mix of outcomes, some positive and some negative. If an unfavourable outcome occurs (due to a risk scenario that was represented in the model), then attribution of the outcome to a poor forecast would not really be possible.

Thus, to some extent, the embedding of risk assessment approaches within management decision-making creates a measure of transfer of accountability for bad outcomes to decision-makers in a way that would have been less clear without it; this can create both a challenge and a benefit.

5.5.6 Ensuring Alignment with Incentives and Incentive Systems

One of the key organisational challenges in implementing risk-based planning processes is to ensure that the conflict with incentive systems is minimised.

Most situations involving risk contain a mixture of controllable and non-controllable items: the identification and implementation of risk-response measures is controllable, whereas – once such measures are implemented – the actual occurrence or extent of impact of a risk or uncertainty is not. In other words, the response measure will (for example) alter the likelihood or the impact of a risk or uncertainty, but not change the fundamental situation as to whether (or with what impact) a risk or uncertainty will arise.

Implicit in setting (appropriate) incentives is that one has some control over the aspect of the situation being incentivised. Indeed, if incentives were set in a situation in which there were truly no control, doing so may have negative consequences; it could lead to feelings of unfairness when some staff are awarded bonuses whilst others (perhaps more capable, or who worked harder) are not.

Broadly speaking, one may distinguish between two categories of incentives:

  • Process- (or activity-) based incentives are related not to achieving some target objective, but to the processes that are judged important to doing so. For example, a sales manager may be awarded a bonus for making more than a specified number of new customer contacts (irrespective of the actual level of sales achieved in the short term); of course, there may be a long-term strategy behind such thinking. The outcome, even if bad (but within the originally predicted range), would then not influence incentive awards as long as all relevant process and decision stages were correctly followed: if a thorough risk assessment showed that most (but not all) possible outcomes for a future project were good, but on implementation a poor outcome arises due to “non-controllable” factors that had been identified within the risk assessment, then the decision to proceed was not necessarily a poor one.
  • Outcome-based incentives, which are based around an outcome being achieved, i.e. a decision is judged by its outcome, not by whether it was fundamentally a good one. For example, a sales manager may be awarded a bonus if he or she achieves some specified sales target.

Many managers have a (deep-seated) intuition to prefer outcome-driven incentives. This is probably driven by the fact that many situations are so complex that it is not possible to define all the scenarios that might happen. For example, the market may be changing so quickly with many possible new product innovations or competitor entries (within the timeframe applicable to the incentive system) that a process-based system may track activities that are not generating benefit. On the other hand, an outcome-based incentive could encourage the staff to respond dynamically in an appropriate way, as they find innovative ways to reach their targets, even if the nature of such adaptations cannot be foreseen in advance.

One can argue that in the presence of risk, a process-based incentive has some role, because some elements of the outcome are simply beyond the control of participants, even if they act in the best conceivable manner; it would be unfair to punish genuine bad luck (or to reward genuine good luck). Part of a process-based incentive scheme could therefore simply be to ensure that risk analysis is being used at the appropriate time by the appropriate staff.

One may be drawn to the conclusion that both approaches are necessary in general, with some form of weighting towards one or the other.

5.5.7 Allocation and Ownership of Contingency Budgets

One of the most important applications of aggregate risk assessment is the calculation of the required contingency to include in a plan, as discussed in Chapter 4. In principle, with an appropriate risk model, the calculation of a contingency amount at the aggregate project level is straightforward. For example, one may budget with reference to the P75 of a project's costs (with contingency being the difference between this figure and a base case).

In practice, once contingency amounts are determined, the issue often arises as to whether (or how) to allocate this budget to the underlying individual components. In other words, since the total figure is calculated by reflecting the uncertainty within each subproject or project task (or department, geographical area, business unit, project phase, time period, etc.) and aggregating these together, one may expect the resulting contingency to be allocated back to them.

The challenge in doing so relates directly to the discussion in Chapter 4, in which we showed (e.g. Figure 4.19) that, in general (due to the diversification effect), there is not a linear mapping between the aggregate percentile and those of the individual components. Thus, for example, in the case of the model with five items (Figure 4.21), the P75 of the output corresponded to a case in which the inputs were set at (approximately) their P60 values. Hence, allocating an aggregate P75 budget (so that each component receives its “fair” proportion of it) would result in each component exceeding its budget in 40% of cases (whereas the aggregate project would do so in only 25% of cases). Such an amount may be regarded as insufficient by the staff responsible for a particular component; they may each desire a P75 budget for their component, which would lead to the total budget being at around its P95 point.
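The diversification effect described above can be illustrated with a small Monte Carlo sketch. The model below is purely illustrative (five independent Normal(100, 20) cost items, assumed for this example; it is not the actual model of Figure 4.21, so the exact percentiles will differ somewhat), but the qualitative effect is the same: the aggregate P75 budget is matched when components sit well below their own P75s, whilst granting every component its own P75 budget pushes the total far above its P75.

```python
import numpy as np

# Illustrative, assumed model: five independent Normal(100, 20) cost items
# (NOT the actual model of Figure 4.21; exact percentiles will differ).
rng = np.random.default_rng(42)
n_sims, n_items = 100_000, 5
items = rng.normal(loc=100.0, scale=20.0, size=(n_sims, n_items))
total = items.sum(axis=1)

# Aggregate P75 budget; contingency is the excess over a base case
# (here taken, as an assumption, to be the expected total).
p75_total = np.percentile(total, 75)
contingency = p75_total - total.mean()

# Find the common component percentile q whose per-item values sum to the
# aggregate P75 budget (diversification implies q < 75).
qs = np.arange(50.0, 75.0, 0.1)
sums = np.array([np.percentile(items, q, axis=0).sum() for q in qs])
q_star = qs[np.argmin(np.abs(sums - p75_total))]

# Conversely: if every component receives its own P75 budget, the total
# budget sits at an aggregate percentile well above 75.
sum_of_p75s = np.percentile(items, 75, axis=0).sum()
agg_pct = 100.0 * (total <= sum_of_p75s).mean()

print(f"Aggregate P75 budget is matched by components at ~P{q_star:.0f}")
print(f"Sum of component P75 budgets sits at ~P{agg_pct:.0f} of the total")
```

In this sketch the matching component percentile comes out in the low P60s and the sum of component P75s lands in the low-to-mid P90s of the total, mirroring the P60/P95 pattern discussed above; with correlated items the diversification effect, and hence the gap, would be smaller.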

Thus, there is an inherent conflict, with the possibility of resource hoarding and excess contingency (as discussed in Chapter 4). There is no perfect solution to this, as it is inherent in the nature of the diversification and combinatorial effect associated with uncertainty; thus, it is an organisational challenge.

One may consider centralising budgets at an aggregate project level, with a variety of degrees of strictness:

  • Hold all budgets centrally.
  • Give each component its base budget (with no contingency); any overspend would have to be authorised centrally, and perhaps only where the reasons for the overspend relate to the materialisation of a previously identified risk (in order to maintain a sufficient incentive for adequately accurate planning at the component level).
  • Give each component some contingency, whilst also maintaining some centrally. This may be preferable from a general management perspective, in order to give the task managers some freedom of action, and to reduce the transaction costs of the frequent communication with central contingency functions that would be required, even for small amounts of overspend.

Clearly, such choices may have profound implications for organisational responsibilities, structures, authorisation and other processes, and generally for the fundamental relationship of power and control within organisations, and thus present potentially significant challenges.

5.5.8 Developing Risk Cultures and Other Change Management Challenges

The issues discussed in this chapter clearly show a multitude of challenges that need to be addressed in order to successfully achieve the benefits of risk assessment processes, especially those associated with full risk modelling. These typically involve changes to many areas, including:

  • Management decision processes, attitudes and leadership.
  • Responsibilities, incentive systems, structure and authorisation processes.
  • Communication and training (e.g. on quantitative risk modelling and process modifications).

One may aim to install a more risk-aware culture, with characteristics such as:

  • The embedding of risk analysis results in decision-making processes (for example, creating a cultural non-acceptance of the presentation of static forecasts, or of forecasts for which high-quality risk assessments have not been conducted).
  • A general desire at all levels for openness, transparency and rigour, and objectivity in discussions, plans and decision-making processes.
  • High levels of competence, good corporate governance and leadership.
  • Encouragement of open debate to question and challenge ingrained thinking, and an acceptance of a diversity of opinion.
  • Room for curiosity, creativity and problem-solving, a desire to innovate, to make mistakes, to learn and to improve.
  • A feeling of shared responsibility and accountability, nurtured by management example, behaviour, leadership and the appropriate structure to incentive systems.

In principle, change management frameworks and tools can support this process. Indeed, the well-known Kübler–Ross model (of the stages of grief) may be adapted to the general change processes associated with the implementation of risk assessment:

  • Denial: “We don't need this/it is not happening/it won't create any benefit.”
  • Anger: “Why have we let ourselves into this situation? Who is to blame?”
  • Bargaining: “OK, how are we going to move forward? What do we need to do/what investments are to be made/what trade-offs might there be?”
  • Depression: “This is harder than we thought. We are not getting anywhere.”
  • Acceptance: “We have made a breakthrough, we can see the benefits. We need to keep pushing ahead, to accept that it will take time and we'll make some mistakes.”

As the framework shows, there are typically different phases, and these require patience, management of the fear of failure (and of failure itself), encouragement and leadership. Therefore, an important element of achieving success (in terms of widespread implementation and cultural change) is to have a strong top-down implementation, which acts as a catalyst to bottom-up activities.

The use of simulation tools (especially if they are user-friendly) such as @RISK can often provide fairly simple ways to demonstrate the benefits, create some practical results and generally support the process of cultural change. Of course, they are only a small part of the overall process, and are not a “silver bullet”, nor a substitute for the myriad of other changes required for a truly successful implementation.
