
A project team may well find itself facing the challenge of epistemic uncertainty. The lead-up to this can be ambiguous and subtle, and often there is little warning. It is useful to foster a keen state of awareness that recognises and anticipates emerging adversity. This chapter offers some answers as to how epistemic uncertainty can be noticed, even if it cannot be forecast in the traditional sense. It addresses how we can notice more about those events impacting our project of which we have little knowledge.

The lure of normality

On our road to ‘notice more’, there are some obstacles. A central challenge is somewhat self-inflicted and lies in our way of thinking. We are naturally tempted to fall back on our ideas of a world of normality because we tend to feel more comfortable there; a range of lures limits our noticing.

Expectations of normality

We want things to be fine. We would like the plan to unfold just as the schedule pinned to the wall shows that it should. We long for a continuous state of normality and therefore perhaps it is not surprising that such longings lead us to ignore indications of future failure. We may be surrounded by signs of potential adversity, yet because these do not fit our expectations, we tend to disregard them so that our stakeholders and we remain in a state of relative comfort.

Normalcy bias

In AD 79, Mount Vesuvius, an active volcano just south of modern-day Naples in Italy, erupted, burying the Roman city of Pompeii. In hindsight, the 16,000 inhabitants may well have survived had they not spent hours watching the eruption until it was too late to escape. This lesson from history is an example of normalcy bias, or normality bias (more colloquially referred to as the incredulity response and, most interesting of all, the ostrich effect). Normalcy bias can be defined as the tendency of people to minimise the probability of potential threats or their dangerous implications (Perry et al. 1982). Even when there is a certainty of disaster, people exhibiting normalcy bias will await confirmation from other sources (or just watch whatever everyone else is doing) before taking any kind of action (Omer and Alon 1994). As in Pompeii, people tend to underestimate the likelihood of a disaster and its possible effects and do not take action until it is too late. Essentially, it is the assumption that nothing bad will happen in the future because nothing bad has happened in the past.

Virtually every major disaster in history had some element of normalcy bias as a component. Years of safety in a city do not mean that it will never be targeted by terrorists. The fact that previous hurricanes have not caused major disaster does not mean that future ones will be equally benign, as Hurricane Sandy and Hurricane Katrina demonstrated. In the face of natural or other major disasters, the temptation is to blame government inaction, but even where there are preparations in place, people are often inactive until it is too late.

Normalcy bias can frequently be seen in business environments before some major business disaster. Eric Brown, the technology consultant, investor, and entrepreneur, recounts how, when he worked at WorldCom in the 1990s, the company was experiencing dramatic growth and fantastic profits. He observes that everyone could see that this was unsustainable and that the organisation was going to implode at some point soon, but nobody was taking steps to tackle the problems in the company, or even trying to identify them. Brown notes that people were saying that nothing had happened yet, so they doubted anything would happen any time soon. In 2002, WorldCom filed for what was, at the time, the largest bankruptcy ever. There were plenty of reasons for people to be concerned about the operations and the business model, but instead of trying to tackle these concerns people ignored them. Normalcy bias is a danger to organisations because people who assume everything is going to be all right will fail to plan for problems that might beset their organisation.

That said, normalcy bias is a perfectly natural behaviour. Psychological studies have found that about 70 per cent of people are affected by normalcy bias and that, in many cases, this is a good thing because it helps to pacify the other 30 per cent who are prone to over-reacting and losing control. The problem, however, is that those same people also hinder the ones who are doing the right thing.

Complacency

The assumption of normality is reinforced by periods of success. Success is characterised by the absence of failures. Such accomplishments in the past make us think that we can expect the same to continue. We become increasingly focused on what has worked, rather than on what might go wrong. Hence, we may pay less attention to signs of possible failure. Our minds are focused on what we want to see. We anticipate that this success will continue, and any potential signs of looming adversity do not fit our expectations. This complacency means that we are more attentive to information that reinforces our confidence and less likely to pick up on small signs of trouble ahead.

Hindsight bias

Another bias related to cognitive psychology is hindsight bias, also known as the knew-it-all-along effect or creeping determinism (Roese and Vohs 2012). It is a tendency to see an event as having been predictable after the event, despite there having been little or no objective basis for predicting it. It is one of the most widely studied ‘decision traps’, with studies ranging from sports results and medical diagnoses to political strategy, as well as accountancy and business decisions. It results in people believing that events were more predictable than they were in reality, which can lead to an oversimplification of cause and effect.

In organisation studies, hindsight bias is very frequently cited (Christensen-Szalanski and Beach 2006). Its effect is that people tend to overestimate the quality of their decision-making where the results were positive, and underestimate the quality of decisions where the results were negative. For example, if a student drops out of a prestigious university and then goes on to found a multi-billion dollar company, the decision might appear far more intelligent than it was. Another effect is to see innovations from the past as far less inventive than they were, as they become obvious in hindsight. A very common example is for people to overestimate their foreknowledge of an event. For example, traders may believe a crash is coming every week for years before a stock market crash actually arrives; when it does, they conclude they ‘knew it all along’. The result is overconfidence: individuals begin to overestimate their insight and talent, and may believe they can predict the future with some kind of accuracy. Another effect is sometimes known as ‘chronological snobbery’: people of the past are viewed as ignorant and irrational, so we do not believe we can learn much from them. A further effect of hindsight bias is that people do not plan or prepare adequately, or collect sufficient information and data, because such effort is regarded as a waste of time – last time they put the effort in, the result was one they ‘knew all about’ anyway.

There are three levels of hindsight bias (Roese and Vohs 2012):

  • Memory distortion, which involves misremembering an earlier opinion or judgment. People selectively recall information that confirms what they already knew to be true. As they look back on their earlier predictions, they tend to believe that they did know the answer all along.
  • Inevitability, which centres on people’s belief that the event was inevitable. People try to create a narrative that makes sense of the information they have.
  • Foreseeability, which involves the belief that people personally could have foreseen the event. People interpret this to mean that the outcome must have been foreseeable from the beginning.

Focusing on the familiar

Even so, some of the messages may make it onto our ‘radar’ of noticing. We are more likely to take into account the information we are familiar with. For example, if a potential problem looks similar to one we have encountered in the past, this familiarity helps us identify and make sense of it. Consequently, our attention may be drawn to those factors we are more accustomed to. Unfamiliar features, amplified by ambiguity, are more likely to be filtered from our attention.

Aiming at the measurable

Filtering out the unfamiliar failures goes hand-in-hand with paying attention to the measurable and quantifiable. If a piece of information lacks specificity, we tend to blank it out. Hence, we might only take into consideration information about a failure that is unambiguous enough to be processed and, by extension, managed. As a consequence, we might take our project into the unknown, only noticing, or wanting to notice, aspects of failure that are ‘clear’ enough to us and that we have experienced in the past. However, because we have a tendency only to notice what we expect to notice, our ‘radar’, especially for uncertainty, is often not as sensitive as we would like.

Negative connotation

The term ‘risk’ (or aleatory uncertainty) is often understood as ‘bad things happening’. The upside, ‘opportunity’, is far less commonly discussed. Who would believe adversity to be a ‘good’ thing? These negative connotations frame the discussion in a bad light, as the language used hints at the manager’s inability to design all adversity out of a project in the first place. Overweighting the ‘bad’ in projects makes us lose sight of opportunities.

Observational selection bias

Observational selection bias is the effect of suddenly noticing things that were not previously noticed and, as a result, wrongly assuming that their frequency has increased. The things we notice are not more common than they were before; we are just noticing them more. The problem with selection bias of this nature is that if the sample we observe is not a representative one, our resulting judgments can be seriously flawed (Denrell 2005).

A good example is seeking to identify the ways to do successful business by studying only existing successful businesses. It is a classic statistical trap of drawing conclusions from unrepresentative samples. Without also looking at unsuccessful businesses, people might privilege risky business practices, seeing only those companies that succeeded by adopting those practices and not the ones that failed.

A classic case illustrates both how selection bias created problems and how it could be overcome. During WWII, a Jewish Hungarian mathematician, Abraham Wald (who had emigrated to the United States to escape Nazi Germany), was asked to look at the survivability of bombers. Aircraft that had returned had been badly damaged. Aircraft designers had added extra armour to parts of the aircraft that had been most badly damaged but this had not improved survivability and, in fact, losses had increased as the aircraft were now less agile. Wald reasoned that the aircraft that had returned were the survivors and instructed that the extra armour should be added to the undamaged parts. The surviving aircraft showed where non-lethal damage had been inflicted. The result was an increase in aircraft surviving bombing runs (Ellenberg 2015). This case illustrates how the aircraft designers were basing their decisions on a biased sample and how Wald understood that the correct sample was not even observable.
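To make the statistical trap concrete, here is a minimal, hypothetical simulation (the success rates, payoffs, sample size, and the use of numpy are illustrative assumptions, not data from the cases above): firms adopting a risky practice mostly fail, but if we estimate the practice’s payoff only from the surviving firms, the estimate is badly inflated.

```python
# Hypothetical illustration of observational selection bias:
# estimating the payoff of a risky practice from survivors only.
import numpy as np

rng = np.random.default_rng(42)

n_firms = 10_000
# Each firm adopting the risky practice either fails (payoff -1.0)
# or succeeds spectacularly (payoff +2.0); most fail.
payoffs = rng.choice([-1.0, 2.0], size=n_firms, p=[0.8, 0.2])

true_mean = payoffs.mean()           # what a full sample would show
survivors = payoffs[payoffs > 0]     # only successful firms remain observable
observed_mean = survivors.mean()     # what a 'study of successful firms' sees

print(f"True average payoff:    {true_mean:+.2f}")      # roughly -0.40
print(f"Survivor-only estimate: {observed_mean:+.2f}")  # +2.00
```

The survivor-only estimate looks attractive even though the practice destroys value on average, which is exactly the error Wald corrected for by reasoning about the aircraft that never came back.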

There are many ways in which managers in businesses base their decisions on biased selections. These include self-selected samples and conclusions drawn from too small a sample (often from a sample of one!). Often, managers will dredge data or cherry-pick examples that prove their preconceptions.

Ultimately, managers are advised to beware of basing their decisions on very limited or biased samples. This can, in particular, affect risk-taking (Denrell 2003), as managers may well miss early warning signs of emerging risk or even misinterpret signs of problems as meaning that things are going fine.

Zooming in

When we are preoccupied with ‘doing things’, our capacity to look out for new signs of adversity is diminished. When we are fixated on – preoccupied with – what matters to us, we tend to lose the ability to detect other important information. Early detection of fixation is highly beneficial. In the aftermath of materialised uncertainty, even with the benefit of hindsight, we may not understand how we didn’t recognise the signs of impending difficulty.

Myopia

A fixation on getting things done, our expectation of normality and our focus on the tangible and measurable leads to a propensity to be ‘short-sighted’ − to look into the near future and ignore long-term scenarios. We may only consider the short-term future, leaving us unprepared for what could lie beyond our risk horizon.

Temporal discounting

Also known as time discounting or hyperbolic discounting, temporal discounting describes the tendency for people to seek instant gratification. They will choose a reward in the here-and-now rather than the prospect of a greater reward sometime in the future; and the further into the future the reward is, the lower the value attributed to it. Temporal discounting leads us to downplay the future negative consequences of decisions that have positive consequences right now, and it helps explain why we tend to focus on the immediate consequences of an action without fully considering its long-term effects. It is often referred to as hyperbolic discounting because the effect follows a hyperbolic (as opposed to exponential) curve over time. In other words, once a certain time threshold is passed, the additional devaluing effect of further delay diminishes.
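As a rough sketch of the difference between the two curves (the discount rate, reward amount, and delays below are arbitrary illustrative values, not empirical findings), hyperbolic discounting is commonly modelled as V = A / (1 + kD), against the exponential form V = A·e^(−kD):

```python
# Sketch comparing hyperbolic and exponential discounting of a future reward.
# The discount rate k, the reward of 100, and the delays are illustrative only.
import math

def hyperbolic(amount: float, delay: float, k: float = 0.1) -> float:
    """Present value under hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1 + k * delay)

def exponential(amount: float, delay: float, k: float = 0.1) -> float:
    """Present value under exponential discounting: V = A * exp(-k*D)."""
    return amount * math.exp(-k * delay)

for delay in (0, 1, 5, 10, 50, 100):   # delay in, say, weeks
    print(f"delay={delay:>3}  hyperbolic={hyperbolic(100, delay):6.1f}  "
          f"exponential={exponential(100, delay):6.1f}")
# Beyond a certain delay the hyperbolic value declines only slowly, which
# matches the point above that the devaluing effect of time diminishes.
```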

Neuroscience has discovered that people’s brains are geared to maximising rewards (Haith et al. 2012), which means we have a desire for greater satisfaction in the present. This means that we do not reason things through but make discounting decisions reflexively and automatically. This creates significant problems for businesses and helps to explain why so many companies avoid fixing problems until they are forced to, and why so many companies and investors emphasise near-term success over long-term profitability. It also helps to explain phenomena as varied as why people do not save for pensions, why people struggle to lose weight, and why they have problems with alcohol and substance abuse.

Key enablers to the art of noticing

In the previous section, we identified some of the behavioural shortcomings that limit our ability to notice, in particular, epistemic uncertainty. There is help, though! In the following sections, we provide an overview of ‘what’ one can do and ‘how’. We elaborate on how to notice and, in particular, how to detect signs of adversity which lack the specificity of risk. The key issue of noticing is – traditionally – one of living in the past and believing that the future will unfold as the past did. Worse, in the absence of failure, we gradually reduce our awareness of the need to ‘look out’ for warning signs that are unfamiliar and immeasurable, yet which could provide us with an early warning that something is not right. So what does this mean in practice for the project manager?

Acknowledgement

To notice indications of epistemic uncertainty, one needs to understand the environment that is at stake: the project. Noticing epistemic uncertainty without knowledge about the human, technical, organisational, and environmental factors that determine the success of a project is comparable to trying to find a needle in a haystack without understanding the concept of a haystack. Therein lies the challenge. Projects are inherently uncertain and complex. Any understanding of ‘how it works’ might be temporary, at best, and necessitates a knowledge of the project ‘here-and-now’ as well as the needs and wants of the multiple stakeholders.

So the first enabler of greater resilience is an acknowledgement that our world is uncertain; it is a message of ‘we do not necessarily know’ that is powerful and yet often unnerving, as it counters our longing for certainty and comfort.

Vigilance

Accepting this knowledge deficiency and the resulting epistemic uncertainty should trigger an almost permanent state of unease. This unease stems from the project constantly being assailed by uncertainty: every new action and activity is an opportunity for something to go wrong or to work out unexpectedly. This is not, though, a reason for project participants to panic. Rather, it is a reason to be alert and attuned to the possibility that any minor error, problem, or close call could be symptomatic of a flaw in the wider project system. Such events must be scrutinised with the ‘big picture’ in mind. What may be the possible wider implications?

This means that the process of anticipation should never really cease, but ought to manifest itself in the project environment. This should result in heightened alertness for the project manager and the team. They must be alive to the vagaries of the project, not just in terms of variances in the overall schedule and budget but also in terms of the wider project system and its multiple interconnected aspects.

Freedom to be vigilant

This sense of unease and the alertness to the signs of materialising uncertainty should not come out of thin air. What is required is ‘space’ in the form of time to spend on spotting failure. Project managers who are preoccupied with administrative tasks may be less sensitive to any failure happening around them. They need to be given the freedom and opportunity to be on the lookout – permanently – and to ask potentially inconvenient questions. This involves spending time close to what is happening. This is often not what we see project managers doing. Sometimes, organisational incentives and systems drive managers to focus on detailed reporting and ensuring adherence to the original plan. This is all well and good, of course, but execution with only limited alertness is like crossing a busy street with only one eye on the traffic.

Reporting culture

It is not just down to the project manager to be on high alert. Emerging uncertainty should be spotted throughout the project by the various participants. To create effective project awareness, though, warning signs of impending adversity need to be reported quickly and honestly. Organisational culture (and the lack of incentives) often means that ‘bad news’ is not passed on to others. Silo thinking and the negative connotations of such ‘inconveniences’ can prevent people from reporting failures, both small and large. When adversity strikes or individuals suspect poor performance, they need to share this information confidently and promptly, without the fear of being blamed or considered a troublemaker.

Cross-functional teams

Projects are best run with the benefit of multiple perspectives. The richness of different views from a range of involved stakeholders offers the opportunity to augment one’s own ‘radar’ for what is and might be going wrong. The diversity here does not just cover gender, ethnicity and cultural background, but also the work background and expertise of participants. Taking on board the views of legal, commercial, finance, marketing, operations, procurement, and human resources, as well as those of technical representatives, brings in much more knowledge and insight. Every ‘lookout’ will be vigilant about the area he or she is familiar with, and bringing them all together allows a wider sensitivity to what may go wrong or is going wrong. This does not imply, though, that multiple – often diverging – perceptions should be moulded into a single consensus view that consequently becomes anchored as a commitment. The purpose of a cross-functional perspective is to provide a rich picture, often with contradictory views, to provoke richer noticing. In this respect, it is not intended to enable simplification.

Intelligent tools

Most project management tools are based on the rationale of turning an uncertain environment into a single deterministic future. Instead of challenging a project manager’s assumptions, they often reinforce the illusion of certainty, providing single estimates that are turned into commitments and corresponding simple ‘pass/fail’ criteria. Rarely is the real world this straightforward.

Any risk management system should include mechanisms to look beyond the short-term risk horizon and incorporate concepts beyond merely the more tangible criteria. It should include variables such as confidence, controllability, interdependence, and proximity to try to capture uncertainty and complexity. There are, indeed, alternatives out there to traditional risk management. A range of tools is available – among them scenario planning (see Chapter 8) – designed not primarily to provide an accurate prediction about a single future but to make you appreciate the variance and richness in predictions. Alertness requires tools and techniques that not only help project managers deal with the repeated past but also allow us to address uncertainty and complexity in our predictions.
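As a purely illustrative sketch of what recording more than a single estimate and a pass/fail threshold could look like, the structure below uses the variables named above; the field names, scales, and thresholds are our own assumptions, not a prescribed standard or any particular tool’s schema.

```python
# Illustrative, assumed structure for an uncertainty register entry that records
# confidence, controllability, proximity, and interdependence alongside a range
# estimate rather than a single deterministic figure.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class UncertaintyEntry:
    description: str
    estimate_range: Tuple[float, float]   # e.g. optimistic/pessimistic cost
    confidence: float                     # 0.0 (pure guess) to 1.0 (near certain)
    controllability: float                # 0.0 (outside our control) to 1.0 (fully controllable)
    proximity_weeks: int                  # how soon the issue could materialise
    interdependencies: List[str] = field(default_factory=list)  # linked work packages

    def needs_attention(self) -> bool:
        """Flag entries that are near-term, weakly controlled, or poorly understood."""
        return (self.confidence < 0.5
                or self.controllability < 0.5
                or self.proximity_weeks <= 4)

entry = UncertaintyEntry(
    description="Supplier's new component may not meet thermal spec",
    estimate_range=(50.0, 220.0),
    confidence=0.3,
    controllability=0.4,
    proximity_weeks=6,
    interdependencies=["enclosure design", "certification schedule"],
)
print(entry.needs_attention())  # True: low confidence and weak controllability
```

The point of such a record is not precision but to keep the less tangible dimensions visible instead of collapsing them into a single number.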

Leading the art of noticing

Making people ‘aware’ in a project is a challenge and requires a leadership approach that generates honesty, transparency, and openness towards epistemic uncertainty. We do not suggest a state of paranoia in which a project is constantly thinking and ‘living’ failure. Such a state may only lead to exhaustion and fatalism. However, a state of ‘healthy’ uneasiness – a heightened and yet focussed awareness of epistemic uncertainty – is one that we can strive for. The following actions may provide a start.

Battling complacency

The prolonged perception of the absence of failure, good as it might feel, is most often an indicator that people have taken their eye off the ball. At the centre of the battle against complacency is the need to make people uncomfortable by challenging them and making them less certain. Certainty about future results is comforting and pleasing, but it can be illusory. Try to play the role of ‘Mr/Ms Sceptic’. Be sceptical about people’s optimism that everything is going according to plan. With the help of scenarios, cultivate possible outcomes to make people think of what they should be looking out for to prevent these things from happening. Provide them with a safe culture to speak up about whatever concerns them. Even if these concerns are not tangible enough to make it on to a risk register, they are valuable pieces of information that may indicate the existence of uncertainty. In this sense, worrying is positive and valuable. First and foremost, do not try to battle complacency by turning it into a ‘tick-box’ exercise. Forms and documents cannot capture subtle but important information, such as emotions and gut feelings.

You may pick up signs of complacency: in meetings, people may be preoccupied with sharing their successes. Status reports may outline what has been achieved. It is the task of a leader to acknowledge and celebrate success, but also to put a question mark after every success story and to ensure time is available to raise concerns. Subtly change the way the discussion is going from what has gone right to what might still go wrong.

Moving the onus of proof

Our minds like a state of ‘normality’. Long periods of success may only reinforce the perception that failure will not trouble us. Managing projects can turn into a routine exercise, and, indeed, this is what most project management frameworks advocate: consistency in action as a means to reduce human situated cognition as a source of error. Under these conditions, the expectation can become one of continuing success, and it may be hard to persuade project participants of contrary possibilities. The longer this positive state of affairs continues, the more difficult it can become to raise doubts and concerns. Notions of epistemic uncertainty may be ignored or suppressed. Here, managers need to move the onus of proof. Assume that the project is inherently uncertain and complex until proven otherwise. When presented with audits and status reports that show a lot of ‘green’, show doubt and investigate; challenge people in their perceptions. This does not have to be confrontational but should go below the ‘surface’ to challenge assumptions and create greater awareness of how the project could still go off track.

Many audits in projects provide proof that everything is ‘all right’ (organisational incentives tend to encourage this). As a thought experiment, can you imagine an audit that provides evidence of uncertainty – about the extent of what we do not know about a future state – and complexity and yet also offers some form of evidence that the project is in a state of heightened readiness and preparedness to deal with it?

Making people imagine

We tend to focus on risks (aleatory uncertainty) because they are tangible and measurable and thus provide us with the comfort of (relative) certainty. But with the help of some tools, such as scenario planning, we can make people imagine beyond the risk horizon – ambiguous and difficult to measure – from a range of different perspectives. This is unlikely to provide accurate predictions about how the future will unfold, but it makes people appreciate the richness of multiple possible futures. It also gives ‘permission’ to worry, express doubts, and raise concerns that cannot necessarily be quantified in traditional risk management. The inability to ‘prove’ the existence of an issue or concern must not preclude it being brought to attention. As project leader, we can – and should – use tools that strive for accuracy and prediction. Know their limitations! Use additional techniques, not to determine a single, most likely future, but to strive to explore the murky uncertainties that normally remain undiscussed.

What we cannot measure and articulate with confidence makes us uneasy. It is preferable to stay in our comfort zones, and not worry about, let alone raise, such concerns. To reject someone’s worries out of hand, or perhaps challenge them for data, sends a clear signal not to try that again. It is a delicate exercise to build greater vigilance − hard to promote, yet easy to discourage. The key is to make the team ‘at ease’ with feeling uneasy about uncertainty and complexity.

The illusion of asymmetric insight

People are tribal. They may claim to celebrate diversity of opinion and respect others’ points of view but, in truth, they tend towards forming groups and then believing others are wrong just because they are others. This is the essence of the illusion of asymmetric insight (Pronin et al. 2002).

This ‘us and them’ perspective leads to people creating views of others that are biased and partial. The source of this bias seems to stem from the unshakable belief that what we observe in others is far more revealing than our similar behaviours. It leads to stereotyping and prejudice of the ‘other’ and also underestimating threats from others (or, for that matter, possible partnerships and alliances).

It has been shown that this belief is self-delusion, and yet all human beings succumb to it. We all hold a personal conviction that observed behaviours are more revealing of other people than of the self, while our inner voice of private thoughts and feelings is more revealing of the self. We also believe that the more negative traits we perceive in someone else, the more doubt we express about this person’s self-knowledge, although we do not apply the same logic to ourselves.

This behaviour fosters conflict and misunderstanding. When others seem to see the world differently from oneself, one tends to see one’s views as objective and correct, and theirs as irrational and wrong. It also means that people are irrationally closed-minded and blindly conforming – which can create serious problems in organisations that seek to be ‘alive’ to novelty and responsive to change (Pronin et al. 2007).

Stripping adversity of its negative connotations

The notion of risk and uncertainty as something ‘bad’ – although inherently natural – can be confronted by a project leader by providing different labels and connotations. Why not call a risk a ‘design evolution’, for example? Detach the existence of uncertainty from any perception of incompetent planning, and frame its management – proactive and reactive – as an opportunity that justifies some form of reward. For example, proactive responses to uncertainty, irrespective of whether they fully work or not, should be highlighted as having prevented a worse state of affairs. Show your appreciation for the action and point out what could have happened if the measures had not been taken. The message of support for pragmatic responses is an important one. Again, if the organisational culture is one of blame and reprisal, then the staff closest to the issues are unlikely to pay attention to them and respond.

Focus on the issue, not the person reporting it

The negative connotations of forms of adversity may suppress the will of project members to share and report their concerns – the act of reporting might be interpreted as an acknowledgement of incompetence in preventing the failure in the first place. It is the project leader’s task to focus discussion on the reported issue, not on the person reporting it. Hence, any discussion about adversity should be impersonal, although the response should have a clear owner. The focus must be on the message, not the messenger. Appreciating where a worry or concern originates from can be useful, but the individual who identifies it needs to be encouraged to come forward, not incentivised to keep quiet.

Encouraging the sharing of adversity

It takes effort to share noticed adversity effectively. Provide your project team members with the freedom to speak up and share their perceptions. This may involve an ‘open-door’ policy or a standing agenda item in project meetings. Project members are then more likely to share these encounters with uncertainty with others, to allow everybody on the project to appreciate the potential problems. Often, such sharing is done by completing a form which is then reflected in an impersonal spreadsheet. Use the power of interpersonal interaction to augment this. By all means, use the necessary documentation, but build on it with more socialised sharing. People buy into stories and ‘real’, personal, accounts of uncertainty far more than into reading a document. Make uncertainty ‘alive’ to others, encourage and reward people for speaking up.

Shared information bias

Shared information bias (also known as the collective information sampling bias) is the tendency for group members to spend more time and energy discussing information that all members are already familiar with (i.e. shared information), and less time and energy discussing information that only some members are aware of (i.e. unshared information). Harmful consequences for decision-making can arise when the group does not draw on unshared information (hidden profiles) to make a well-informed decision (Stasser and Titus 1985).

The shared information bias may also develop during a group discussion in response to the interpersonal and psychological needs of individual group members (Thompson and Wildavsky 1986). For example, some group members tend to seek group support for their own opinions. This psychological motivation to garner collective acceptance of one’s initial views has been linked to group preferences for shared information during decision-making activities.

The nature of the discussion between group members reflects whether biases for shared information will surface. Members are motivated to establish and maintain reputations, to secure tighter bonds, and to compete for success against other group members. As a result, individuals tend to be selective when disclosing information to other group members.

In many ways, this kind of behaviour is counterintuitive. It seems strange that people are not eager to bring new information to group meetings to develop ideas, build knowledge and help with decision-making. There appear to be three main reasons why people might be reluctant to share new information (Wittenbaum et al. 2004). First, shared information is more readily recalled, so is likely to be thought of first in group settings. Second, people have often decided beforehand what is important, and previously shared information is what people tend to base their pre-judgments upon. Finally, and perhaps most importantly, people are often anxious about how they will be seen by other members of a group, and shared information tends to take precedence. It has been found that people are regarded as being more capable when they talk about shared rather than unshared information (Wittenbaum and Bowman 2004).

In order to overcome the problems of shared information bias, consideration needs to be given to the dynamics of groups and organisational culture (Thompson and Wildavsky 1986; Wittenbaum and Bowman 2004). The foremost thing to do is to discourage groupthink. Where groups display less groupthink, they are likely to share unpooled information more readily. Additionally, expert knowledge is very important. First, if people in a group recognise the relative expertise of different group members, they are more likely to be open to new information. Second, lower status group members are more likely to be ready to speak up and contribute new information if their expertise is acknowledged by higher status group members.

The impact of noticing on relationships

Noticing more, in principle, is a good thing. However, it has the potential drawback that it may confuse and unnerve your stakeholders. You may appreciate the nuances of your project and be comfortable with the discomfort of ‘not knowing’ with confidence, but it can be challenging to communicate this to your stakeholders.

Certainty is an illusion

Being on the lookout and noticing beyond the risk horizon, of something going or potentially going ‘wrong’, is a tacit acknowledgement that standard planning concepts are perhaps flawed. All the efforts that have gone into predicting the future, although valuable, are insufficient to design uncertainty out of the project completely. Uncertainty is still there, and therein lies the opportunity. Projects are often sold on the premise of certainty, so stakeholders can sit back and see a plan turn into reality. Stakeholders such as sponsors need to understand that epistemic uncertainty is ‘normal’ and that, despite all the efforts that go into planning, estimates remain estimates, or are mere speculations. Many aspects remain unknown. Without such an acknowledgement, there is limited need for vigilance and desire to look beyond what has been planned for.

Mindful practices

This vignette on shortening the planning horizon looks at arguably one of the most hotly debated and contested aspects of project management. A range of organisations shorten their planning horizons under the umbrella of ‘Agile’ project management. However, we wish to highlight that this approach could and should be applied in projects characterised by uncertainty, regardless of whether the fundamental approach to managing uncertainty is a (mini-)waterfall approach, or iterative and incremental.

Iterations

At Intel, projects are predominantly run by following the agile philosophy. Part of that philosophy is the definition of iterations. Iterations are single planning and development cycles, ranging from two to six weeks. These cycles are of fixed length within a project but can vary across projects.

At the end of an iteration, there may be the release of output, the achievement of a milestone, or a design review. However, it is not the output of an iteration that defines its length, as the duration is fixed for repeatability. Releases to the customer can be made at the end of one or many iterations, or more frequently, in alignment with customers’ needs. After the completion of an iteration, it is reviewed and critiqued by stakeholders – such as the end-user – to accommodate flexibility in revising the ultimate goal to be achieved and the way to get there. This form of incrementalism allows managers to plan an iteration based on the learning gained from the previous one.

The benefits of having such incremental iterations in place are numerous:

  • Transparency and visibility: stakeholders receive insight and give feedback not just at the end of a project but throughout.
  • Flexibility: frequent reviews and critiques allow timely changes in what to do next and where to go.
  • Collaboration: iterations ‘enforce’ frequent interactions between stakeholders and the provider, with influence from both sides on changing ways of working and the goals to be achieved.

The enablers to achieving such visibility, flexibility and collaboration are based on setting expectations. The parties involved need to accept project autonomy, with everyone having their say. That autonomy is built on trust. Iterations are not purposeful if they are only used as a means of checking what another party does. Instead, beyond providing transparency and thus visibility, they offer a platform to experience progress in delivering a solution while having the comfort of flexibility to change it.

Another form of using iterative planning is ‘Rolling Wave’ planning. This is a process whereby you plan part of the project while the work is being delivered. As the project proceeds and its latter stages become clearer, additional planning can take place. At the outset, high-level assumptions are made and broad milestones set which become more concrete as the project progresses. As activities are undertaken, assumptions become better defined, and milestones become more precise.

At TTP, it is often the case that the client is unsure of exactly what they need to do to solve their problem. Indeed, they may be unsure of the true nature of the problem itself. TTP’s project leaders must be able to adapt their planning.

Each phase constitutes, in principle, a new contract, yet with shortened planning horizons to allow greater flexibility in adjusting each phase in the light of new information. Rolling wave planning at TTP takes uncertainty into account and provides the benefit of not having to ‘fix’ the entirety of the project.

Functional circumstances frequently constrain the duration of a phase. As it is applied in TTP, phases are often defined by their functions of concept, design, and production. Alternatively, phases can be demarcated by a certain level of confidence. The only ‘fixed’ durations are for those phases for which project managers are sufficiently confident. If estimates are deemed unreliable, the planning horizon will be shortened accordingly. Uncertainty in estimating – characterised by the level of confidence – plays an important role in determining how long a ‘wave’ may be.
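A minimal sketch of that idea, assuming an invented confidence scale and week limits (TTP’s actual practice is described only qualitatively above): the detailed planning horizon of the next ‘wave’ shrinks as estimating confidence falls.

```python
# Hypothetical rule of thumb: shorten the next planning wave when
# confidence in the estimates for that phase is low.
def next_wave_length(confidence: float, max_weeks: int = 12, min_weeks: int = 2) -> int:
    """Return a planning horizon in weeks, scaled by estimating confidence (0.0-1.0)."""
    confidence = max(0.0, min(1.0, confidence))
    return max(min_weeks, round(min_weeks + confidence * (max_weeks - min_weeks)))

for c in (0.9, 0.6, 0.3, 0.1):
    print(f"confidence={c:.1f} -> plan the next {next_wave_length(c)} weeks in detail")
```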

Whether it is called rolling wave, incremental, or iterative planning, these concepts have one thing in common – shortening the planning horizon to accommodate uncertainty. The flexibility in goals, approach and our interpretation of these aspects – it is all in the eye of the beholder – requires constant collaboration with all the stakeholders involved.

At Aviva, similar to what is prescribed in major agile project management standards, interactions between stakeholders are facilitated in the form of ‘Scrum’ meetings, opening a discussion on how they could potentially do something differently.

Participants at these meetings are all major stakeholders. The meetings do not tend to last longer than 15 minutes, to keep the discussion focussed on relevant reflection and corrections, and also to address the overall project goal and approach.

For the sake of emphasis, three questions are addressed:

  • What did you do yesterday?
  • What will you do today?
  • Are there any impediments in your way?

These questions may sound mundane and, if repeatedly asked, stakeholders might find them distracting and irrelevant. However, the purpose is to ensure the ongoing transparency of what is going on with all the stakeholders together. Issues and blockages can then be quickly resolved, allowing the project to move on.

Traditional project management may entail monthly planning cycles and weekly interactions. However, all our organisations – Intel, TTP, and Aviva – appreciate the need to shorten the planning horizon in their projects; to have short iterations and daily updates.

What is mindful about it? Venturing beyond the risk horizon requires an iterative process of planning. It is a continuous process of challenging mindsets about what is ‘not known’, perceiving and re-perceiving what epistemic uncertainty entails, and asking whether our actions match the present situation or a plan that is already ‘out of date’.

In the methodology of Agile Project Management, a planning cycle tends to be 30 days to six weeks. This is consistent with the degree of foresight individuals can make sense of and confidently cope with.

Reporting beyond boundaries

One might assume that the project team does all the noticing. But why not enlarge your ‘radar’ beyond your internal boundaries? Use the wider group of stakeholders as lookouts who are vigilant enough to raise their concerns with you constantly, or at least repeatedly. Their interests and yours should be aligned with regard to everyone’s desire to see the project succeed. If possible, initiate this right at the start of the project, to involve them in imagining beyond a short-term risk horizon. This can create a shared understanding of the project environment and how best to handle it. Without their engagement as part of the project radar, they may interpret any unplanned change as a surprise.

You have a choice in addressing uncertainty. You can choose to ‘sell’ your project by showing (off) your planning and portraying the project as certain. In this case, there may be a limited need to be on the lookout for epistemic uncertainty, difficult as it may be to do so anyway. However, you also have the opportunity to use your stakeholders’ capability to be on the lookout, and consequently to integrate them into your noticing radar. Hence, in meetings with them, feel free to ask about their opinions and gut instincts. Be reluctant to focus solely on what has gone well in a project; drive the discussion about the future, beyond the risk horizon. It is always worthwhile asking ‘What do you think might go wrong?’

Consistency in relationships

The ability to notice and share beyond boundaries forms the pillars of an ‘informed’ culture, in which all parties understand each other’s perspectives, even if they are mutually contradictory. This implies that if stakeholders understand your perspectives, they are more capable of looking out for you and noticing on your behalf. It is less a notion of ‘I know better’ than a way of adding to the richness of the overall project knowledge and understanding. It requires, however, that you guide your stakeholders to an understanding of your position, and vice versa. Vigilance can only be instilled if parties learn to understand each other’s stances through communication, transparency and trust.

Paying for supposedly idle resources

To be vigilant, to notice beyond the measurable and tangible, one needs ‘space’ in the form of time and resources to look out, to challenge others in their complacency, to encourage and motivate people to do it themselves and actively share their perceptions, perhaps even to think and reflect. Such activities do not necessarily directly contribute to the execution of the project; they help you to prepare and ready yourself for something that may never materialise. Hence, the resourcing of such activities is by no means uncontroversial (especially for budget-holders). A heightened state of awareness comes at a price, without necessarily producing tangible and measurable outcomes. Stakeholders require a shared understanding that noticing more is an art for which faith in its success replaces proof.

Kodak – a failure of noticing

In January 2012, Kodak, an American technology company that concentrated on imaging products and had invented the hand-held camera, filed for bankruptcy. What was once considered a hub of technological wizardry suddenly became an institution with little hope of surviving much longer into the future.

The demise of Kodak, like nothing else, highlights the ongoing need for top-level managers to cope with the effects of uncertainty. The use of photographic film was pioneered by George Eastman, who started manufacturing paper film in 1885 before switching to celluloid in 1889. His first camera, which he called the ‘Kodak’, was first offered for sale in 1888. It was a very simple box camera with a fixed-focus lens and single shutter speed which, along with its relatively low price, appealed to the average consumer. The first camera using digital electronics to capture and store images was developed by 1975, yet the adoption of digital cameras was slow. In 1999, with the rise of broadband technology to share digital images, the demand for stand-alone digital cameras exploded. The volatility in the environment, amplified by the rise of the smartphone after the introduction of the iPhone in 2007, caught Kodak off guard, partially because of its lack of understanding of market volatility.

Epistemic uncertainty is associated with a lack of predictability about how the environment will unfold, and with a lack of awareness and understanding of developments, issues and events.

Recent generations of Kodak managers were too wedded to the past business model to take the radical steps needed to reposition their company as a digital leader. In other words, they were too comfortable with their business model, assuming that, however the environment changed around Kodak, their ‘proven’ ways of working would weather any storm. Kodak, as an organisation, became too comfortable in believing in its own invincibility.

To notice more, organisations need to create organisational instruments of discomfort, such as those described in this chapter. People in an organisation need to be able to speak up about potential failure; they need to imagine beyond the risk horizon before the competition imposes its will on the organisation. The potential to fail as an organisation needs to be brought to the forefront, so that the organisational mind creates a capability to notice more warning signals of change and impending failure.

Towards an art of noticing

The art of noticing is built on the need to look beyond the risk horizon, beyond what we expect to be ‘normal’, and beyond established organisational boundaries. The benefit is not necessarily to increase the accuracy of prediction, but – to put it simply – to keep on noticing more and become increasingly aware of issues beyond the measurable and familiar, beyond aleatory uncertainty. Such a heightened state of awareness towards epistemic uncertainty is characterised by a healthy uneasiness (although without switching to a state of paranoia) about the unknown and unexpected. Such a project is one in which participants keep their eyes on what is or might be going wrong instead of blindly focusing on what has gone right or is expected to go right.

Reflection

How well do the following statements characterise your project? For each item, select one box only that best reflects your conclusion.

Scale: 1 = fully disagree, 3 = neither agree nor disagree, 5 = fully agree.

  • We acknowledge that our initial estimates are just that, estimates. 1 2 3 4 5
  • We communicate uncertainty in our planning. 1 2 3 4 5
  • People are provided with ‘space’ (e.g. time) to look out for things that could go wrong. 1 2 3 4 5
  • We aim to increase the perceived uncertainty of project participants. 1 2 3 4 5
  • People are constructively challenged in their estimates. 1 2 3 4 5
  • We make people think about the uncertainties that cannot be specified. 1 2 3 4 5
  • We use intelligent tools and techniques that not only take into account what we know but also what we do not know. 1 2 3 4 5
  • People are encouraged to share risks and uncertainties beyond their set boundaries. 1 2 3 4 5
  • Risk and uncertainty are seen as something ‘good’ to look out for. 1 2 3 4 5

Scoring: Add the numbers. If you score higher than 27, your capability to be more discriminatory in your noticing of uncertainty is good. If you score 27 or lower, please think of how you can expand and enhance your capability of noticing uncertainty beyond the risk horizon.
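For completeness, a small sketch of the scoring rule described above; the nine answers listed are placeholders, one per statement.

```python
# Scoring the reflection statements: nine answers, each 1-5, summed and
# compared against the threshold of 27 given above.
answers = [4, 3, 5, 2, 4, 3, 4, 3, 5]   # placeholder responses, one per statement
score = sum(answers)
if score > 27:
    print(f"Score {score}: your capability to notice uncertainty looks good.")
else:
    print(f"Score {score}: consider how to expand noticing beyond the risk horizon.")
```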

References

Christensen-Szalanski, J. J. J., and L. R. Beach. 2006. “The Citation Bias: Fad and Fashion in the Judgment and Decision Literature.” American Psychologist 39(1): 75–78.

Denrell, J. 2003. “Vicarious Learning, Undersampling of Failure, and the Myths of Management.” Organization Science 14(3): 227–243.

Denrell, J. 2005. “Selection Bias and the Perils of Benchmarking.” Harvard Business Review 83(4): 114–119.

Ellenberg, J. 2015. How Not to Be Wrong: The Hidden Maths of Everyday Life. London: Penguin Books.

Haith, A. M., T. R. Reppert, and R. Shadmehr. 2012. “Evidence for Hyperbolic Temporal Discounting of Reward in Control of Movements.” Journal of Neuroscience 32(34): 11727–36.

Munir, K. 2016. “The Demise of Kodak: Five Reasons.” Wall Street Journal. http://blogs.wsj.com/source/2012/02/26/the-demise-of-kodak-five-reasons/.

Omer, H., and N. Alon. 1994. “The Continuity Principle: A Unified Approach to Disaster and Trauma.” American Journal of Community Psychology 22(2): 273–287.

Perry, R. W., M. K. Lindell, and M. R. Greene. 1982. “Threat Perception and Public Response to Volcano Hazard.” Journal of Social Psychology 116(2): 199–204.

Pronin, E., J. Berger, and S. Molouki. 2007. “Alone in a Crowd of Sheep: Asymmetric Perceptions of Conformity and Their Roots in an Introspection Illusion.” Journal of Personality and Social Psychology 92(4): 585–595.

Pronin, E., D. Y. Lin, and L. Ross. 2002. “The Bias Blind Spot: Perceptions of Bias in Self versus Others.” Personality and Social Psychology Bulletin 28(3): 369–381.

Roese, N. J., and K. D. Vohs. 2012. “Hindsight Bias.” Perspectives on Psychological Science 7(5): 411–426.

Stasser, G., and W. Titus. 1985. “Pooling of Unshared Information in Group Decision Making: Biased Information Sampling during Discussion.” Journal of Personality and Social Psychology 48(6): 1467–78.

Thompson, M., and A. Wildavsky. 1986. “A Cultural Theory of Information Bias in Organizations.” Journal of Management Studies 23(3): 273–286.

Wittenbaum, G. M., and J. M. Bowman. 2004. “A Social Validation Explanation for Mutual Enhancement.” Journal of Experimental Social Psychology 40(2): 169–184.

Wittenbaum, G. M., A. B. Hollingshead, and I. C. Botero. 2004. “From Cooperative to Motivated Information Sharing in Groups: Moving beyond the Hidden Profile Paradigm.” Communication Monographs 71(3): 286–310.
