10
It's Not About Ideas

The most important question to ask ourselves — in any moment or context — is, Are we making meaningful progress?

In the ‘Experiments' quadrant of Quest-Augmented Strategy, meaningful progress is determining if a strategic option is viable. You might be wondering why this is not about coming up with cool ideas. In fact, we've leapt straight from exploration to experimentation while barely mentioning the ‘i' word.[1]

This is not to say that ideas aren't important; they are. They are an essential component of how we think and learn. But if you're doing this right, ideas should be happening most of the time as a result of good thinking.

The word ‘idea' comes from the Greek for ‘form' or ‘pattern', which in turn derives from a root meaning ‘to see'. So, in other words, ideas are patterns we've sensed or detected. If we have established a diverse and stimulating feed, and have allowed ourselves to be immersed in challenging new future contexts, it's very likely we'll have woven a few patterns together into ideas.

These new ideas are certainly not part of our default thinking,[2] but some might make up some of the strategic options in our quiver. If so, great! When a particular future seems likely to emerge, we can pull these options from our quiver and start testing their validity.

We do this testing by taking a science-based approach (see chapter 11). But before we can do this, we need to get over the mystical reverence in which we hold ideas.

COMING UP WITH THE ‘BEST IDEA' IS THE WORST IDEA

An element of my work involves speaking at conferences and events. Whenever we get a request of this nature from an enterprise, the first thing my team attempts to do is get a sense of the strategic intention behind the event. Sometimes the event has a real strategic imperative, providing a chance to rally folks together and shine a light on the path ahead. From here, we focus on the effect of the event — what we want people to think, feel and do as a result. Next, we can work out the relevant motivation strategy and design for before, during and after the event, and I can then tailor my contribution to serve this.

But sometimes no strategic imperative is apparent. The organisers are busy, and doing the best they can with the limited time and resources they have. And so I tend to get the default briefing — for example, ‘We'd really like you to challenge their thinking, to get them motivated, to think outside the box, to be more innovative' (and so on). Not satisfied with this, I probe and begin to challenge their thinking within the briefing call — encouraging them to think beyond the default conference scenario, to lift their sense of what's possible, and to begin to co-create something brilliant.

In one such briefing call, the organisers revealed that their conference theme was ‘innovation'. ‘We're hosting a festival of ideas,' they told me.[3] And so I frowned (over the phone), and began to probe further. They soon revealed they were intending to run a competition to reward the ‘best idea' generated during the event. Throughout the event, participants would break up into teams to workshop new ideas. The process would then culminate in a short pitch from each team on the last day of the event; ideas would be judged and a prize would be awarded to the team that came up with the best idea.

This is not an uncommon approach. In large enterprises, senior leaders often share a certain frustration — a yearning for fresher thinking and practical ideas that will enable them to be more innovative. They often also perceive a lack of time to dedicate to this, thanks to the Curse of Efficiency. And so competitions, much like goal-setting, become the go-to for busy executives — they are an easily implementable, nicely visible box to tick that says, ‘See, we're doing something about this.'[4]

But true innovation exacts a toll. This toll might be time, relationships, comfort, money, or some combination thereof. Unless you're willing to invest in it, you're only going to get lukewarm results. Or worse — results that sabotage the very thing you're trying to achieve.

Let's return to our ‘competition'. You might be thinking that a competition for ideas is better than not doing anything for innovation, right? Maybe, though I have my doubts. My issue is not with the intention to engage people in the process of innovation but, rather, with the shallow, default way in which many seem to go about it.

Here's a list of issues associated with competitions to find the ‘best idea':

  • Define ‘best'. Searching for the best idea glosses over the importance of a clearly defined problem. And is this idea we are searching for required to solve a problem in our current context, or an emerging future context? In other words, are we reacting to a problem, or are we being proactive in our strategy?
  • Best, according to whom? What agendas are at play here? Seeing as we are not taking an objective, scientific approach but, rather, relying upon the subjective judgement of others, we must ask: who does this idea serve? If we want to win this competition, what's more important: something that may secure new value and relevance for the enterprise (or business unit), or something that will appeal to the short-term priorities of those who are judging the merit of the idea?
  • Okay, great: here's a simple, easy-to-implement ‘tactic'. Real progress and change are the result of many ideas combined, woven, tested, unravelled and rewoven together until they work. To host a competition for the ‘best idea' — and to treat ideas as single units — means anything established is, at best, tactical. This may be useful in the context of a clear, present challenge. But if we're looking to build for the future, we require strategies that harmonise several tactics and many ideas. Such work is much bigger than simply coming up with the best idea.
  • And, oh look — time's up. Generating great ideas requires thorough thinking. Certainly, working together with a diverse team in a tight time-frame can help to generate creative thinking — constraints are useful in this regard. But doing this in one single session will favour fast thinking and ideas that are easy to comprehend. The classic divergent–convergent approach in a group context may also mean that consensus kills the best ideas (leaving us with a bunch of default ideas everyone can agree on). What's more: it's hard to decouple ideas from identities in this context. Here, we run the risk of people claiming ownership over ideas. If consensus or group-think sees an idea not make it to the pitch phase, the message to the individual is clear: don't stray too far from the norm. Stick to the default.
  • Too bad, losers. And so, people pitch their set of predictable and fairly lukewarm ideas. Voting (judgement) is conducted and a winning team is chosen. They are awarded a prize: an extrinsic reward that contaminates any intrinsic motivation the team may have had towards innovative, strategic thinking. Did they participate because they care about the future relevance of the enterprise, or did they just want to win a cool prize? Hard to tell. Maybe both — but the unfortunate message is clear: we value that which gets rewarded. And where does the balance sit for most enterprise scorecards? That's right: the default, business-as-usual stuff. For the 90 per cent of folk who didn't win a prize, the message is also charmingly clear: your ideas weren't good enough. Thanks for trying, but better luck next time. And they will wait for a next time. That's right — wait. Is this the effect we want? To have our people wait to innovate, and to only engage in the process if an appropriate reward is offered? Or do we need to integrate this into how we work? I think the latter, and parts VI and VII share some thoughts on how to do this.

Many of the issues outlined in the preceding list can be extrapolated to the flawed notion of searching for the ‘best idea' in the first place. By seeking ideas, we gloss over the importance of exploration — instead latching on to notions of ‘best' from within our current context. We treat innovation as though it is a finite game to win, rather than the infinite game of delivering value and staying relevant.

Steve Jobs, the co-founder of Apple and Pixar, was once asked by BusinessWeek, ‘How do you systemise innovation?' His answer: ‘You don't.'

This answer may have disappointed those looking for a neat and easy system to adopt — a magical solution to liberate folks from the uncertainty, angst and paradox inherent in pioneering innovation and growth. But the truth behind innovation is that no neat, proven systematic approach makes innovation happen for you today, or into the future. The approaches taken by others in the past may not work for you today.[5]

Rather than seek fancy new systems, let's instead focus on the one approach that has stood the test of time: science.

A SCIENCE-BASED APPROACH

Science, like everything, is flawed. But of all the flawed approaches one can take to pioneering through uncertainty, to advancing knowledge and discovering new pathways of possibility, it is the least flawed.

Of course, some argue that science can't prove anything. They'd be correct. For example, science cannot prove that the sun will rise tomorrow. But, thanks to a vast amount of evidence combined with sound reason, we can be almost 100 per cent certain it will. A scientific approach, when leading any pioneering strategy, is infinitely preferable to a crystal ball, blind faith, gut feel or strong belief.
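As an aside, this ‘almost 100 per cent' has a classical formalisation. Laplace's rule of succession estimates the probability that an event observed on every one of n occasions will occur again, as (n + 1)/(n + 2). Here's a minimal Python sketch; the sunrise count is a back-of-envelope assumption.

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule of succession: after observing `successes`
    out of `trials`, the estimated probability of success on the
    next trial is (successes + 1) / (trials + 2)."""
    return (successes + 1) / (trials + 2)

# Back-of-envelope assumption: ~5,000 years of recorded sunrises,
# every single one a success.
days = 5000 * 365
print(f"P(sun rises tomorrow) ≈ {rule_of_succession(days, days):.6f}")
```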

Many folks turn to science for certainty and conclusive evidence — but such an approach is anathema to good thinking. Any conclusion closes the gates to curiosity — it's imagination, curiosity, doubt and wonder that drive scientific discoveries. Rather than trying to prove we are right, science seeks to find where we may be wrong.

Nearly everything we once thought true about the world has been shown to be wrong, thanks to science. We once thought the Earth was flat, but then discovered it was round. We once thought the Earth was the centre of the universe, but then discovered that we are just a blip in a vast and possibly infinite universe.

Again, science doesn't seek to prove things — rather, it seeks to disprove.

How does this work in the context of Quest-Augmented Strategy?

Well, take a look at your quiver of options and think about the most important emerging context. Consider which strategic option might be most valuable to your enterprise, given this emerging context. Pull said option from thy quiver, consider it, and then come up with a declarative hypothesis.

A hypothesis is a supposition, made on the basis of limited evidence, as a starting point for further investigation. It takes the shape of a testable statement. Here's a relatable example: ‘If we replace email with an enterprise social media or communications platform for all internal communications, we will see an increase in staff productivity, collaboration and engagement with cross-functional initiatives.'
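One way to pin down what ‘testable' means is to spell out the change being made, the metrics expected to move, and the direction of movement. Here's a minimal Python sketch of that shape; the structure and field names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A testable statement: if we make this change, we expect
    these metrics to move in this direction."""
    change: str
    metrics: list[str]
    expected_direction: str  # "increase" or "decrease"

no_email = Hypothesis(
    change="replace email with an enterprise social media or "
           "communications platform for all internal communications",
    metrics=["staff productivity", "collaboration",
             "engagement with cross-functional initiatives"],
    expected_direction="increase",
)
```

If any part of this can't be filled in, the statement isn't yet testable.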

Now, I've chosen an example that should be relevant to any enterprise leader today. This isn't a future thing. Email is the worst. But, also, if an organisation were to see that the retention of talent and the move to a more entrepreneurial culture are critical for ongoing agility and relevance, then this might also be a strategic option to serve that future.

In any event, each of the provided examples gives us a testable statement. We don't just jump straight to wide-scale execution — we need to see if this approach and our reasoning are valid.

And so, using the three pillars of science — observation, evidence and reason — we proceed to experiment. Figure 10.1 provides an overview of this process, along with a legend. We'll discuss each element in the following sections.

Flowchart: a testable hypothesis (A) leads to an experiment (B). If it didn't work (C), check the methodology and the hypothesis. If it worked (D), repeat the experiment to check the hypothesis is still valid, then explore ‘what if' scenarios (E), leading to a viable alternative option (F).

Figure 10.1: The experimental approach
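For readers who think in code, the loop in figure 10.1 can also be sketched as a toy simulation. The following Python sketch is a paraphrase of the figure, not a real methodology; the probabilities and function names are invented for illustration.

```python
import random

def run_experiment(design_quality: float) -> bool:
    """Stub for one small, safe, smart, cheap, fast experiment (B).
    In reality this is a pilot with real people, not a coin flip."""
    return random.random() < design_quality

def test_option(design_quality: float = 0.4, max_iterations: int = 10) -> str:
    """Toy walk from testable hypothesis (A) towards a viable
    alternative option (F), following figure 10.1."""
    for i in range(1, max_iterations + 1):
        if not run_experiment(design_quality):
            # C | It didn't work: improve the methodology (modelled here
            # as a better design) or, on reflection, reconsider the
            # hypothesis entirely.
            design_quality += 0.1
            continue
        # D | It worked: repeat before trusting it; it may have been luck.
        if all(run_experiment(design_quality) for _ in range(3)):
            # E | What if ...: probe the peripheries, then hand over.
            return f"viable alternative option after {i} iteration(s)"  # F
    return "hypothesis reconsidered: option not viable (yet)"

print(test_option())
```

The point of the sketch is the branching: a failed experiment sends you back to the methodology or the hypothesis, while a successful one earns only repetition, not celebration.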

A | The testable hypothesis

This is the starting point of experimentation. When an option is pulled from a quiver, it carries with it the weight of a heap of conversations, overlapping possible futures, and the intersection of established and emerging trends. Getting lost in the complexity inherent within a strategic option can be very easy, and so a clear hypothesis helps to clarify and focus our enquiry. As ever, the overarching purpose here is to determine if this option is viable.

Part of this phase will involve looking at the literature and reference material accumulated as part of the exploration phase. Because we are at the edge here, it may be difficult to consult peer-reviewed publications (as a scientist might do), but by engaging with relevant peer authorities, we can clarify our experimental approach.

B | Crafting and conducting an iterative experiment

Small, safe, smart, cheap and fast — this is how we start our experimentation. At the simplest level, pacing through a hypothetical will allow you to rapidly experiment with (or prototype) an option. Here, much like the personas and pathways we adopted in chapter 8, we map out and adopt the personas of the customers/users and stakeholders engaged with this experiment. We pace through how this option might work if we were to implement it — and through empathy and future-pacing, we identify a bunch of snags, pitfalls and traps we otherwise wouldn't have been aware of. These insights further help us to refine the design of our experiment.

At the next level, we might shift to wireframes and paper mock-ups — cheap props and other tools that might help us understand the user experience within the experiment. We anticipate resistances, and the questions or concerns other senior leaders may have about the option. We also seek constraints early, so we can enhance and improve the design of the experiment.

Next, we begin to understand the things we might measure. You'll note that we haven't asked the market what it wants yet. Keeping our customers and end users in mind throughout all elements of our experimentation is important, but at this stage we also need to factor internal stakeholders into the mix. What questions will they have? What ‘evidence' will they need to see in order to feel that the option is viable?

Evidence, of course, is always contextual.[7] Evidence-based practice in medicine takes a long time to establish, because the population is so vast, and the risks are so high. But, within the context of an enterprise, a comfortable level of evidence can be obtained more quickly because (for many options) you are not creating new medicine that could have nasty side effects. Rather, you're mitigating strategic risk or unlocking strategic advantage — and waiting is the riskier option.

And so, we gather evidence relevant to both our internal and external stakeholders, so as to assess the validity of our strategic option.

At this stage, we may be experimenting with real people. If you recall the hypothesis example I provided earlier — a ‘future of work' cultural experiment where we have no email for internal communications — we might be working with a small group of people over a short period of time. Again, think small, safe, smart, cheap and fast. Maybe we start with a small business unit of 30 and hold the experiment over a week. This experiment may include the adoption of a new internal social media or communications platform, combined with a new weekly work ritual. What do we find?
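What we find depends on what we measure. Here's a minimal sketch of a before-and-after comparison for such a pilot; every metric and number below is invented for illustration.

```python
# Hypothetical metrics for a 30-person business unit: a baseline week
# on email versus a pilot week on the new platform.
baseline_week = {"internal emails sent": 4200,
                 "cross-team threads": 35,
                 "focus survey score (1-10)": 6.1}
pilot_week    = {"internal emails sent": 150,
                 "cross-team threads": 61,
                 "focus survey score (1-10)": 6.8}

for metric, before in baseline_week.items():
    after = pilot_week[metric]
    print(f"{metric}: {before} -> {after} ({(after - before) / before:+.0%})")
```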

Or, if you recall our slightly more edgy ‘internal biometric intervention agent', our initial efforts might include analysing existing continuous glucose monitoring devices available in the market (along with similar devices), and mapping out the friction points and opportunities available. Then, a prototype app might be created — just a rough set of wireframes — to better understand how users might interact with the system in everyday life. Later, these experiments might include working with folks who have type 1 diabetes, and monitoring their blood glucose levels with the app. ‘Intervention' might be simulated at this stage, and plenty of learning may occur, but if the evidence suggests that people may be able to integrate this app into their everyday lives, research can shift to the development and early prototyping of the embedded ‘sensor'.

Or, in the case of our pioneering farmer, initial experiments may include overcoming the ‘brand' perception of insects. A ‘pop-up' restaurant in partnership with a celebrity chef might be created along with an ‘insect-only' menu. Such an experiment might help to determine if a viable market exists among gourmet restaurants.

At this point, we are not looking to create the perfect experiment. We're looking to create an opportunity to accelerate our learning and understanding of the viability of an option.

C | It didn't work!

How interesting![8] Your hypothesis being disproven may be due to one of two things. Either:

  • your methodology could be improved
  • your hypothetical stance needs to be reconsidered.

If, upon reflection, you believe your methodology was flawed, great! You're now wiser than before, and can design a better experiment. The enterprise context will always have complex variables at play — some of them you can control, and many you cannot.

In the ‘no internal email' experiment, maybe miscommunications occurred, or expectations weren't clear. Or maybe unanticipated frictions were encountered in the process of the experiment. Or maybe some assumptions were revealed to be incorrect. Or, heck, maybe the internet dropped out or some firewall got in the way. In any event, it's back to the drawing board for the next iteration.

Likewise, if the pop-up gourmet insect restaurant turned out to be a flop, maybe it was due to a lack of adequate marketing? Maybe people are actually comfortable with the notion of eating insects for protein — they just don't see it as gourmet. And so, exploring the viability of this option among other customer segments may be prudent.

If you're keeping experiments small, safe, smart, cheap and fast, the ramifications of setbacks should be low. You're not putting all your hopes into ‘one shot', and so you should be able to get onto the next iteration quickly and with relatively little fuss.

If, upon reflection, you believe your hypothetical stance needs to be reconsidered, great! In our context here, this means that a strategic option may not be viable after all — and so back to pioneering thinking you go (enriched with this new insight). And guess what? Thanks to your experimentation, you may have just saved the enterprise from making a bad strategic decision.

In any event, you've got options for meaningful progress and learning.

D | It worked!

Or, maybe … maybe it was just good timing, luck, an unanticipated positive variable or a quirk within the sample group. We're not sure if the option is viable at this stage — we just know that our hypothesis has yet to be disproven. Have we discussed the results with peers? Have we thrown enough at it? Have we accounted for all cognitive biases?[9]

Perhaps you can repeat the experiment with a different group, or different variables, and get different results? Or maybe it's time to scale up the experiment with a slightly larger group? This might be a bit more expensive, and a tad more risky, but if we're smart about it we can still keep it safe (and minimise the ramifications of what might go wrong).
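One cheap way to interrogate the ‘maybe it was just luck' worry is a permutation test: shuffle the group labels many times and count how often chance alone produces an effect as large as the one observed. A minimal Python sketch follows, with invented scores.

```python
import random

# Hypothetical post-pilot productivity scores: pilot group versus a
# business-as-usual control group.
pilot = [7.2, 6.8, 7.5, 7.0, 6.9, 7.4]
control = [6.4, 6.7, 6.2, 6.9, 6.5, 6.3]

observed = sum(pilot) / len(pilot) - sum(control) / len(control)

# Shuffle the labels 10,000 times: how often does chance alone produce
# a difference at least as large as the one we observed?
pooled = pilot + control
extreme, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    fake_pilot, fake_control = pooled[:len(pilot)], pooled[len(pilot):]
    diff = (sum(fake_pilot) / len(fake_pilot)
            - sum(fake_control) / len(fake_control))
    if diff >= observed:
        extreme += 1

print(f"Observed difference: {observed:.2f}")
print(f"Shuffles at least this extreme: {extreme / trials:.1%}")
```

If chance reproduces your result often, you haven't disproven much of anything yet; repeat the experiment, or scale it up.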

Unless you have a very clear mandate from the CEO and senior leadership that communicates that experimentation, curiosity, learning and relevance are valued (as much as ‘business as usual' activities), chances are you'll need to do some ‘stakeholder management' here. This naturally gets easier the more influence and authority you have.

If you're an unofficial intrapreneur working with a small team looking to make a difference, you may need to reach out to senior leaders in your organisation to ‘sponsor' your efforts. This introduces a whole heap of potential complexities, though, depending on how savvy they are with Quest-Augmented Strategy. Be careful that you don't end up proceeding to simply fulfil their own needs — being instrumental is important if you want to gain traction with those who hold influence, but being instrumental to meaningful progress is more important.

Let's say that you're able to repeat your experiment and scale up your efforts. One of three things may happen. You'll realise that:

  • your methodology needs to be improved
  • your hypothetical stance needs to be reconsidered
  • you are now ready to throw some ‘what ifs' at your hypothesis.

We're familiar with the first two options (back to the drawing board), but the third option gets really interesting.

E | What if …

At this stage, we start exploring the peripheries and ‘what if' scenarios the chooks may consider.[10] Let's start with peripheral experimentation.

In my early days as an academic, I lectured at three different universities simultaneously. I would find myself lecturing in different units — some of which I had no real expertise in.[11] One such unit was ecotoxicology. Now, I'm no ecotoxicologist, but I can tell you, having to lecture in this stuff did make me learn a thing or two. One element of ecotoxicology relates to the safety measures designed to protect ‘average people'. These include guidelines for the allowable limits of chemical products or pollutants in the environment. However, because they were designed for ‘average people', these guidelines don't protect the more susceptible population groups that exist on the edges of the bell curve — for example, infants; the elderly; people with particular sensitivities; and people whose occupation exposes them to higher or more persistent doses, synergistic effects (such as when a cleaning agent is combined with hot water and inhaled via steam) or accumulative effects. All of these groups add a hefty precautionary element — which is very important in the context of human health.

In the context of enterprise strategy and pioneering leadership, the precautionary principle needs to be tempered with risk-mastery. With technology catalysing change faster than ever before, hesitation can be just as dangerous as proceeding blindly.

And so, in our experimentation phase, we consider what might go wrong. Where might this get stuck, and how might we test (and learn) for this now?

If we think in terms of our first example — no email for internal comms — we may be able to anticipate some of the resistance, concerns, friction, snags, traps, tar pits and pitfalls along this new path.[12]

So some of our questions might include: How will this work for our frontline sales staff? Will this create extra burdens or friction for them — if, for example, they're emailing clients externally but then having to switch platforms internally? What about security risks? What is our backup or contingency plan if something goes wrong? What if important communications get lost among the chatter? What if the Gen Ys jump on board and fill our feed with lol kittehs? No, seriously: how are we going to bring our more stubborn folk on board?

If experimental design can factor in these considerations (and if you're inviting real users into the process) you will — through repeated experimentation — be able to iron out many (but certainly not all) kinks in the option.

The result?

F | A viable alternative option!

Hurrah! Here, we've used progressive reasoning to observe new patterns, and evidence suggests that this option may be viable. Wootus maximus! This is the handover piece — the ultimate output from pioneering thinking and doing. One of the many Holy Grails we may find by questing into the unknown. Packaged intelligence to enrich, augment and inform strategic decision-making.

Is it guaranteed to work? No! But, just as science can't prove anything yet can still make confident predictions, by this point we can be confident it will work. This confidence, in turn, can allow enterprise strategy to be more courageous and pioneering.

IT'S ALSO NOT ABOUT SUCCESS

Just as this process isn't about ‘ideas', we also don't want to focus on ‘successful outcomes'. We want to get to a point where an enterprise can generate its own intelligence to stay ahead of the game and unlock new growth arcs and enduring relevance. But when an enterprise has a history of incentives and weighted scorecards geared toward short-term wins, and a management culture that comes down heavy on compliance, it can be hard to get this type of experimentation happening. The more we celebrate ‘success', the more we communicate what we consider to be worthy — successful outcomes.

But this only leads to more of the same outcomes — outcomes that serve our current context and not the effort that goes into securing new value and ongoing relevance.

Remember: in science, there is no such thing as failure, truth or success. There are only disproven hypotheses, learning and progress.

Let's explore how we can rethink failure to make more meaningful progress in your enterprise.

Notes
