In Chapter 1, we discussed some general contextual challenges in the implementation of robust decision-making processes, including:
This chapter discusses some additional challenges in the implementation of full risk modelling activities, focusing especially on issues relating to organisational structure, processes and culture. These include:
Risk assessment and management is already widely used in many organisations, at least to some extent. For example:
Thus, it is easy to believe that existing procedures are already sufficient, and that further formalisation or development is unnecessary. There may indeed be organisations for which this is true, although rare is the organisation that genuinely has all the elements in place. This section covers specific challenges in this respect.
In principle, staff functions (such as ERM) should not be the “owners” of risk assessments of projects that relate to specific business unit activity or projects:
Certainly, the working procedures between business units (or departments) and corporate ERM functions should not, in general, arrive at a point where business units are asking ERM questions such as: “What are the key risks to my project?” or “What are you doing to manage the risk within my project?”, although such cases do nevertheless arise in practice!
Thus, the existence of such staff functions is, in general, not sufficient to ensure that business project risks are adequately addressed.
Such staff functions do, of course, have important and valid roles, generally around:
Although the use of risk assessment principles should generally be integrated into day-to-day activities and led by the project's business owners, doing so alone would be insufficient: some important items may be outside the scope or capability of an individual, small team or department, and there are many significant challenges relating to organisational culture, processes and biases that may be insurmountable for an individual or small group to overcome. Thus, even if everyone is “doing their job”, it would still be necessary to formalise the process in some cases, especially where:
The use of risk registers is often a valuable step in the overall risk management process. However, risk registers are insufficient in some cases, so that full risk models are necessary. The reader is referred to the discussion at the beginning of Chapter 4 in this respect.
Sensitivity and scenario analyses, and their implementation, are no doubt familiar to most readers. They are indeed a powerful tool in some contexts. These techniques, their relationship to risk and simulation, and their Excel implementation are described in detail in Chapter 6. Here, we simply note some of their key limitations:
Simulation techniques have been applied to business applications for several decades. Unfortunately, some experiences did not add as much value as they perhaps should have; thus, one sometimes encounters scepticism as to the benefits. In this section, we discuss some of the issues that most frequently arise in this respect. Generally, risk models lose credibility with senior decision-makers when the results presented are plainly wrong, unintuitive or fail to pass the “common sense” test.
It is (perhaps surprisingly) quite common for the results of risk-based simulation models to show that all possible outcomes are profitable or reasonably favourable. For example, one may have a simulation of the total discounted cash flow in a project (or of its net present value), in which all values in the possible outcome range are positive. Such results are generally unrealistic: it is unlikely in practical business situations that a project would still succeed even in the “worst case” scenario. One would be very fortunate to be involved in such a business! (Of course, the future values of a project may be positive in all cases if historic investment is not included in the calculations; here we are referring to the entire scope of a project.)
Nevertheless, when faced with such a case, one could perform a mental exercise (or group discussion) to identify what could happen in a worst-case scenario (or a set of near-worst or otherwise bad scenarios). If one cannot conceive of even a single scenario in which the project fails, then either one's thinking is too narrow (i.e. not all costs, variables, risks and uncertainties have been taken into account), or the project is so intuitively clear a “no brainer” proposition that no analysis of it should be needed at all!
In fact, almost always (with some genuinely well-intentioned and disciplined thinking), one can readily conceive of outcomes in which a project would fail (when the full set of costs are included), but it is “simply” that the model does not capture the correct behaviour of the situation.
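This effect can be illustrated with a minimal simulation sketch (in Python, using purely hypothetical figures, not any model from this book): when only a narrow revenue uncertainty is modelled, every outcome is profitable, but once a plausible event risk (e.g. a delay or cost overrun) is included, a visible failure tail appears.

```python
# Sketch with hypothetical figures: a narrow model shows no losses,
# while a fuller model including an event risk shows a failure tail.
import random

random.seed(42)
N = 20_000
base_investment = 100.0  # upfront cost (hypothetical)
results_narrow, results_full = [], []

for _ in range(N):
    # Narrow view: only modest revenue uncertainty is modelled.
    revenue = random.gauss(150.0, 10.0)
    results_narrow.append(revenue - base_investment)

    # Fuller view: add a 15% chance of an event costing 30-80 (hypothetical).
    revenue = random.gauss(150.0, 10.0)
    event_cost = random.uniform(30.0, 80.0) if random.random() < 0.15 else 0.0
    results_full.append(revenue - base_investment - event_cost)

p_loss_narrow = sum(r < 0 for r in results_narrow) / N
p_loss_full = sum(r < 0 for r in results_full) / N
print(f"P(loss), narrow model: {p_loss_narrow:.1%}")  # essentially zero
print(f"P(loss), fuller model: {p_loss_full:.1%}")    # a visible failure tail
```

The point of the sketch is not the specific numbers but the structure: an all-positive outcome range usually signals that event risks or cost items like these have been left out of the model.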
One of the challenges (also mentioned later) is therefore one of education and communication: higher levels of management should expect to see a range of possible outcomes, including cases in which the project is not particularly successful or even fails.
Another frequent observation about the output of some risk models is that the ranges generated are too narrow compared with intuition or historic data. Where this is the case, the credibility of the risk modelling process can be called into question.
There is much commonality between drivers of this and the above issue (in which a model shows that all cases are profitable):
Although there are some modelling challenges in risk contexts, in many cases, models can be built that are simple yet powerful and value-added. Of course, no model is perfect, and some are better or more useful than others (see Chapter 7 for further discussion). Nevertheless, any model can be criticised or challenged in principle, as discussed below.
It is sometimes claimed that risk modelling using simulation involves creating models that are too complex for most people to understand, and that they become non-transparent “black boxes” that only the model builder (at best) can understand, with the result that the models are not reliable and cannot be reviewed or questioned by others.
It is true that risk models are in some ways more complex than static models:
On the other hand, both the process of identifying risks and that of reflecting them in a model should create more transparency, not less: for example, cross-functional inputs, more formal processes and the separation of risk ranges from base cases are all steps that shed light on potentially incorrect logic or biases.
Any form of modelling (whether risk modelling or static modelling) has some inherent ambiguity to it: models are simplifications of reality (and hence always “wrong”, otherwise known as “model error”), and are effectively statements of hypothesis about the nature of reality. Thus, in order to build a model with reasonable accuracy, one has to understand a situation to some degree (key drivers, interactions, etc.); on the other hand, where a situation is understood perfectly, a model would not be necessary. There is an important exploratory component to model building that is often underestimated or overlooked: the process of building a model can generate new insight and intuition, and thus help to achieve a balanced (and aligned) rationality and intuition in a decision process.
Of course, there are cases where the output of models is relied on too much, with insufficient consideration given to factors that cannot generally be included explicitly within models: typically, every model is valid only within an assumed context, which is usually implicit and undocumented. For example, most financial models implicitly assume that liquidity traps do not happen, and that refinancing will always exist as a possibility. Some major financial failures are arguably linked to such issues (such as the 2008 Financial Crisis, or the failure in 1998 of the hedge fund Long Term Capital Management).
In principle, there is generally no real difference between building a useful risk model and building a useful traditional static model, and there will be situations in practice in which useful risk models cannot be built. This issue is explored in more detail in Chapter 7.
The identification of appropriate risk-response measures is a fairly tangible benefit of a risk assessment process. On the other hand, some of the benefits of such processes are less tangible: it is not easy to “prove” that any particular decision is correct or not (except in extreme cases, for example where all outcomes are undesirable). Indeed, one may make a rationally based decision, with an optimal risk profile, but still find that the occurrence of residual risks leads to an unfavourable outcome. Thus, a major challenge is to distinguish a good decision from a good outcome (and, similarly, a poor decision from a poor outcome).
Despite the fact that many organisations have introduced risk analysis into their decision processes and have more confidence in their decisions, finding robust evidence that the decisions are better (or that such organisations even perform better) is not easy:
One objection sometimes made about risk assessment is that one is “planning to lose”: a risk (especially an event risk) may never occur, whereas the cost of mitigation (e.g. of reducing its likelihood or impact) is definite once the measure is implemented:
Thus, it may be organisationally more credible not to spend money on risk mitigation (or delay a project until more information is available) than it would be to spend money to reduce the probability of something that may not happen anyway. Hence, someone opposed to the implementation of risk assessment or mitigation measures may be able to position themselves as acting in the interests of the organisation (keeping costs low, implementing projects quickly, etc.). If such risks do occur, they may also be able not only to shield themselves from any blame but also to capitalise on the occurrence by, at that point, taking a leading role in dealing with the consequences, and thus being perceived as a person of action.
In general, the implementation of a formal risk assessment process requires additional work, for example as a minimum:
Clearly, it is important that any effort put into the process has a payback in terms of the value generated: just as it would not make sense to perform an elaborate risk assessment on a simple situation, so it would be unwise not to do so for large, complex projects. In fact, for many projects, the incremental effort and investment would represent a very small proportion of the total project investment, especially if the process is organised efficiently. Thus, the more significant the need for a risk assessment, the more formalised and sophisticated should be the approach chosen.
In practice, any particular organisation would likely need to develop criteria (such as project size limits) in order to decide which approach to risk assessment is most suitable for a particular project. It may be possible to align such criteria, to some extent, with those used within existing processes (such as those that define when a project needs Board-level approval).
Most large organisations have “gating” systems for authorising projects, with projects required to pass through a series of gates (or hurdles) before definitive approval is granted and investments and other resources are committed:
The challenge in this respect is that there are competing forces at work in terms of finding the right balance between using risk assessment too early and too late in the process (and in what form to use it at each stage).
Reasons in favour of using the techniques earlier in the process include:
One reason to use the techniques only at later stages is that a full and detailed quantitative model on every project option from the very earliest stages of the consideration of these options would not be practical: this creates extra work, and could disrupt and damage some of the more creative and flexible processes that are required at the earlier stages, such as ensuring that a complete and varied set of decision options is being considered.
The implementation of risk assessment will typically involve the participation of multiple stakeholders and experts, including cross-functional resources. It is generally important that expectations about the process and the likely time commitments are set early on. If not, process participants may feel that their contribution to the cross-functional risk assessment is complete once risks have been identified and mitigation plans put in place. In fact, their input is likely to be needed at later stages, especially in quantitative approaches; participants may then become frustrated (or unwilling to cooperate to the fullest extent of their capabilities) when asked to provide input on aspects that they feel have already been completed, and an impression of a lack of direction or organisation can result. Specifically, it is important to be clear that:
One key challenge is to decide which parts of the process should be standardised and which should be left more open. This is especially relevant to risk quantification and the building of models:
Certainly, once risk assessment is deeply embedded within an organisation, its processes and culture, the scope for more creative approaches should be readily available; on the other hand, to reach this stage, one must not oversimplify to the point that there is little perceived value-added.
Perhaps the biggest (but often overlooked) challenge in achieving successful implementation of formalised risk assessment processes (especially ones that use aggregation and full modelling approaches) is the need for a change in management practices at many levels. Some of these are explored in this section.
It is probably fair to say that some risk assessment activities already take place within many existing business projects, although the results are not communicated explicitly to higher levels of management:
However, in general, the use of risk assessment will not achieve its full benefit unless there is more explicit communication with management, with the results used as a core basis for decision-making:
In general, there will be some form of specialisation of activities within a risk assessment project; that is, a variety of technical and project specialists will be involved in one-on-one and group discussions, as well as in other cross-functional activities (as discussed in Chapter 2); there may also be a team member assigned specifically to the modelling activities.
It is clear that building a model without proper knowledge of the risks will likely lead to one that is inappropriate for the needs of a project team and of decision-makers. A model built without appropriate alignment with the general risk assessment process is highly unlikely to contain the right variables, to be built at the right level of detail, or to have formulae flexible enough to include the effects of risks and their impacts, or to allow additional risks to be readily incorporated.
On the other hand, it can be challenging for a modelling analyst to receive the required inputs from a general risk assessment team:
Thus, it is often the case that a general process team (that is focused on risk mitigation management) may have less perceived need or incentive to alter the nature of its activities to accommodate the needs of modelling activities.
There is, therefore, the possibility that activities that are undertaken within a general risk assessment process are not properly reflected in the modelling activities, due to lack of communication or of joint incentives (or responsibility) for the overall output of the process.
This is not to say that teams conducting general risk assessment activities have no incentive to interact appropriately with modelling activities. However, such incentives are inherently much weaker than the modelling analyst's incentive to receive appropriate input from the general process. A general risk assessment team does have some incentive in this respect, because a process conducted in isolation from the modelling activities will also be conducted somewhat inefficiently:
Thus, where a quantitative risk model is required as an output of the general process, the success of each depends on close coordination and alignment between the two. Nevertheless, the (typically more senior) general process participants generally have a much weaker inherent need for such alignment than does a modelling analyst.
Thus, it is incumbent on process leaders and senior management to define the approach that is to be used with respect to risk quantification, and to ensure that all participants in the process are appropriately engaged and have clear objectives as to the role of, and need for, quantified risk modelling.
Of course, any successful implementation needs to have significant genuine “bottom-up” support. In addition, it is probably fair to say that management's role must be much more proactive than just supporting any bottom-up initiatives; senior management needs to drive the implementation to make it happen. With the multitude of challenges that relate to organisational processes and culture, a strong “top-down” implementation is almost always necessary (if the use and acceptance of risk modelling is to extend widely into the organisation, and not be conducted only by a few specific individuals). Many of the challenges would otherwise be insurmountable for individuals or small groups of like-minded people, and others would create additional difficulties that in their totality may also be too challenging to overcome.
Examples of ways in which implementation can be driven top down include:
Where cultural norms mean that lower-level staff generally prefer to present an optimistic story to higher level management, it is easy for issues to become hidden until they are critical (or too late), and for assumptions to be too optimistic.
Indeed, attitudes from management such as “bring me solutions, not problems” are common:
Whilst it may be advisable for lower-level staff (as far as possible) to frame their communication in positive terms (e.g. “uncertainty” may sound less negative than “risk”, and “opportunities to improve project performance” are the flip side of risk-mitigation or risk-response measures), there is a responsibility on all layers of management to achieve appropriate delegation of project responsibility whilst remaining open to risks being escalated. For example, management may communicate that they expect to see a range of possible outcomes, including cases in which the project is not as successful as desired or fails, together with the causal factors. Additionally, management must be willing to “roll up their sleeves” to support appropriate risk responses, and not “shoot the messenger”.
Most organisations (apart from perhaps the smallest) have decision-makers who are separate from other key process participants. Specifically, there is very often a separation between senior management and those who provide the information on which management makes decisions, as well as from shareholders and other important stakeholders.
This separation can reduce decision-makers' desire to base decisions on objective methods, such as robust quantitative risk assessment, leading them instead to rely on their own knowledge, judgement, intuition and biases. In particular, one may surmise that as long as the costs of poor outcomes are borne disproportionately by others, and any particular decision-making basis is widely accepted as appropriate (so that its use is not regarded as a weakness), the accuracy or validity of that basis is largely irrelevant. The incentive for decision-makers to make the genuinely right decision can be outweighed by other factors. Cases such as the widespread acceptance of “static” forecasts (as well as the Financial Crisis of the early 21st century) are perhaps driven by such issues.
In particular, the use of static forecasting methodology (which provides a forecast that is essentially always wrong, despite being widely accepted as a methodology) may be preferred by decision-makers and also create a justification for the use of biases, intuition and personal preferences in the decision process: if the project in question is implemented, then two cases will arise:
On the other hand, if a full and robust risk assessment had been conducted prior to the project, it would have shown a mix of outcomes, some positive and some negative. If an unfavourable outcome then occurs (due to a risk scenario that was represented in the model), attributing it to a poor forecast would not really be possible.
Thus, to some extent, the embedding of risk assessment approaches within management decision-making creates a measure of transfer of accountability for bad outcomes to decision-makers in a way that would have been less clear without it; this can create both a challenge and a benefit.
One of the key organisational challenges in implementing risk-based planning processes is to ensure that the conflict with incentive systems is minimised.
Most situations involving risk contain a mixture of controllable and non-controllable items: the identification and implementation of risk-response measures is controllable, whereas – once such measures are implemented – the actual occurrence or extent of impact of a risk or uncertainty is not. In other words, the response measure will (for example) alter the likelihood or the impact of a risk or uncertainty, but not change the fundamental situation as to whether (or with what impact) a risk or uncertainty will arise.
Implicit in setting (appropriate) incentives is that one has some control over the aspect of the situation being incentivised. Indeed, if incentives were set in a situation in which there were truly no control, doing so could have negative consequences; it could lead to feelings of unfairness when some staff are awarded bonuses whilst others (perhaps more capable, or who worked harder) are not.
Broadly speaking, one may distinguish between two categories of incentives:
Many managers have a (deep-seated) intuitive preference for outcome-driven incentives. This is probably driven by the fact that many situations are so complex that it is not possible to define all the scenarios that might happen. For example, the market may be changing so quickly, with many possible new product innovations or competitor entries (within the timeframe applicable to the incentive system), that a process-based system may track activities that are not generating benefit. On the other hand, an outcome-based incentive could encourage staff to respond dynamically in an appropriate way, finding innovative ways to reach their targets, even if the nature of such adaptations cannot be foreseen in advance.
One can argue that in the presence of risk, a process-based incentive has some role, because some elements of the outcome are simply beyond the control of participants, even if they act in the best conceivable manner; it would be unfair to punish genuine bad luck (or to reward genuine good luck). Part of a process-based incentive scheme could therefore simply be to ensure that risk analysis is being used at the appropriate time by the appropriate staff.
One may be drawn to the conclusion that both approaches are necessary in general, with some form of weighting towards one or the other.
One of the most important applications of aggregate risk assessment is the calculation of the required contingency to include in a plan, as discussed in Chapter 4. In principle, with an appropriate risk model, the calculation of a contingency amount at the aggregate project level is straightforward. For example, one may budget with reference to the P75 of a project's costs (with contingency being the difference between this figure and a base case).
In practice, once contingency amounts are determined, the issue often arises as to whether (or how) to allocate this budget to the underlying individual components. In other words, the total figure is calculated by reflecting the uncertainty within each subproject or project task (or department, geographical area, business unit, project phase, time period, etc.) and aggregating these together, so that one may expect that this resulting contingency be allocated back to them.
The challenge in doing so relates directly to the discussion in Chapter 4, in which we showed (e.g. Figure 4.19) that, in general (due to the diversification effect), there is no linear mapping between the aggregate percentile and that of the individual components. For example, in the model with five items (Figure 4.21), the P75 of the output corresponded to a case in which the inputs were set at (approximately) their P60 values. Hence, allocating an aggregate P75 budget (so that each component has its “fair” proportion of it) would result in each component exceeding its budget in 40% of cases (whereas the aggregate project would do so in only 25% of cases). Such an amount may be regarded as insufficient by the staff responsible for a particular component; they may each desire a P75 budget for their own component, which would lead to the total budget being set at around its P95 point.
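This percentile mismatch can be reproduced with a small simulation sketch (in Python, using five hypothetical, independent, normally distributed cost items rather than the actual model of Figure 4.21): an equal share of the aggregate P75 budget sits at only around each component's P60, while funding every component at its own P75 pushes the total to roughly its P95.

```python
# Sketch of the diversification effect with hypothetical cost items:
# the aggregate P75 does not map linearly to component percentiles.
import bisect
import random

random.seed(1)
N = 50_000
MU, SIGMA, ITEMS = 100.0, 20.0, 5  # hypothetical cost distribution

totals = sorted(sum(random.gauss(MU, SIGMA) for _ in range(ITEMS))
                for _ in range(N))
one_item = sorted(random.gauss(MU, SIGMA) for _ in range(N))

def percentile(sorted_xs, p):
    """Value at percentile p of a pre-sorted sample."""
    return sorted_xs[int(p * (len(sorted_xs) - 1))]

# Allocating the aggregate P75 equally puts each component at only ~P60.
share = percentile(totals, 0.75) / ITEMS
comp_pct = bisect.bisect_left(one_item, share) / N

# Funding every component at its own P75 pushes the total to ~P95.
agg_pct = bisect.bisect_left(totals, percentile(one_item, 0.75) * ITEMS) / N

print(f"Equal share of aggregate P75 -> each component at ~P{100*comp_pct:.0f}")
print(f"Sum of component P75s -> aggregate at ~P{100*agg_pct:.0f}")
```

The exact figures depend on the distributions and correlations assumed; with correlated items the diversification effect (and hence the gap between the two percentiles) would be smaller.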
Thus, there is an inherent conflict, with the possibility of resource hoarding and excess contingency (as discussed in Chapter 4). There is no perfect solution to this, as it is inherent in the nature of the diversification and combinatorial effect associated with uncertainty; thus, it is an organisational challenge.
One may consider centralising budgets at an aggregate project level, with a variety of degrees of strictness:
Clearly, such choices may have profound implications for organisational responsibilities, structures, authorisation and other processes, and generally for the fundamental relationship of power and control within organisations, and thus present potentially significant challenges.
The issues discussed in this chapter clearly show a multitude of challenges that need to be addressed in order to successfully achieve the benefits of risk assessment processes, especially those associated with full risk modelling. These typically involve changes to many areas, including:
One may aim to install a more risk-aware culture, with characteristics such as:
In principle, change management frameworks and tools can support this process. Indeed, the well-known Kübler–Ross model (of the stages of grief) may be adapted to apply to the general change processes associated with the implementation of risk assessment:
As the framework shows, there are typically different phases, and these require patience, management of the fear of failure (and of failure itself), encouragement and leadership. Therefore, an important element of achieving success (in terms of widespread implementation and cultural change) is to have a strong top-down implementation, which acts as a catalyst to bottom-up activities.
The use of simulation tools (especially if they are user-friendly) such as @RISK can often provide fairly simple ways to demonstrate the benefits, create some practical results and generally support the process of cultural change. Of course, they are only a small part of the overall process, and are not a “silver bullet”, nor a substitute for the myriad of other changes required for a truly successful implementation.