CHAPTER 19
RULE NUMBER TWO

Zero Accidents

During a visit to Hong Kong, I spotted a sign that grabbed my attention. It read “Zero Accidents on the Road, Hong Kong's Goal”. Next to it was an appealing logo of an anthropomorphised egg, complete with eyes, arms, and legs, with a road wrapped around it. The logo, I later discovered, is called “Mr Safegg”. The egg shape represents “zero”, while the road wrapped around the egg forms an S, short for “safety”.

The sign was put there by the Hong Kong Safety Council (HKSC), who are behind the noble mission it promotes. It's an excellent idea, in theory. After all, who wouldn't want to have accident‐free roads? Answer: anyone who makes money from accidents, but let's move on. While most of us would like to have entirely safe roads, we also know that it's pretty unrealistic. At least for as long as you have humans driving vehicles.

Zero Tolerance

Many organisations adopt a similar approach when it comes to compliance and ethics. How often have you heard phrases like “we have zero tolerance for X”? It's so commonplace that it's almost a cliché.

As with Mr Safegg, it's nice to have something to aspire to. But if it's actually unachievable, then does it really make sense? And might there be a downside? This is where Rule Number Two comes in:

100% compliance is neither achievable nor desirable.

If you think that sounds outrageous, let me explain. I'm not saying you should allow people to do whatever they want. Nor am I saying that every single rule or requirement should be subject to a degree of tolerance. When I say “100% compliance”, I mean in aggregate. There will be specific rules where we do need 100% compliance, or as close to it as we can get. But there are others where we really don't, or where it's simply not feasible.

So, 100% compliance is an impossible dream because the people you're trying to influence aren't entirely reliable. That's not a criticism; it just means they're human.

People Are People

Much as you might like this not to be the case, the people you're trying to influence are fallible. Not because you've hired terrible people, but because they're human. As we saw when we explored the basics of BeSci, the human algorithm isn't perfect. Every single one of us – even you, dear reader – makes mistakes, breaks laws, circumvents rules, and tells lies. Not all of the time, of course. More often than not, we won't do those things. But sometimes – probably more often than we might like to admit – we will.

Brian Cullinan is a prime illustration that even the best‐intentioned people with lots of experience can and do make mistakes (see the Introduction). Indeed, it is often their experience that blinds them. There's a paradox that comes with being in a senior position. On the one hand, junior or inexperienced people will make mistakes because of their lack of experience. Yet the very experience we think junior people lack can blind senior people and cause them to make mistakes too.

Writing rules for Mr Logic, the cartoon character I referred to in Chapter 5, is easy because he'll respond in an entirely predictable, rational manner. Writing rules for ordinary people is much more challenging because we evolve and learn from experience. What we feel about something today might be very different to how we feel about it in a year. Unlike Mr Logic, we'll have a human response to what we're told to do.

The World Is Changing

We're also not operating in a static or stable environment. This means that when it comes to solving problems or delivering business success, there may not be a helpful precedent or “playbook” answer to which we can quickly turn. In many cases, trade‐offs will be necessary. The “right” answer may demand that people override conventional wisdom or break the rules. History is littered with famous names like Sir Isaac Newton, Galileo Galilei, and Rosa Parks, whose undeniable contributions to progress saw them do precisely that.

The fact that we're dealing with sentient, fallible beings operating in a changing environment poses a huge challenge to people who need to write or enforce rules. Even the best rules writer will likely struggle to predict how people might react to their rules and the circumstances in which those rules might be applied. A rule designed for an analogue world might not work in a digital one.

So, if 100% compliance is impossible to achieve, what can we do about it? The good news is that sometimes we don't necessarily need to do anything. In fact, a certain level of noncompliance might actually be desirable.

Hiring Humans to Be Human

Counter‐intuitively, the reasons we hire people in the twenty‐first century, particularly in the Knowledge Economy, may make them less compliant. As technology evolves, what we ask people to do in the workplace is changing. Tasks that are repetitive and predictable can be given to machines, which are cheaper, better at them than we are, and don't need health care or holidays.

That means we're hiring people to do the things the machines can't (yet), tasks that involve skills like nuance, judgement, and emotional intelligence. While these skills bring out the best in humans, they can also bring out the worst. They're also often the skills that help and encourage people to challenge, bend, or break the rules.

Hiring people to be creative or disruptive means they may also apply that creativity and disruptiveness to the rules they're being asked to comply with. Telling them that we expect 100% compliance when that's genuinely impossible risks losing credibility. If we're hiring smart people, we can expect them to sense when we're being unrealistic and react to it. And if they're there to innovate, then being compliant may well feel like the antithesis of what they've been hired to do.

Counter‐intuitively – and just between us – having a few breaches on the record might actually be helpful. If no one ever breaks our rules, how do we know we've calibrated them correctly? Perhaps we've allowed our population too much latitude. We shouldn't encourage it, but having people occasionally bend or break our rules can help make sure we've got them right. As we'll see later in the book, there are ways we can use that to our advantage. Of course, that doesn't mean we should tolerate all errors or breaches. As we'll see, there are rules, and there are rules!

Recoverable vs Irrecoverable

So how can we balance the realism of recognising that there will be breaches and errors with the need to prevent the worst possible outcomes? For that, we need to turn to Netflix. Not to stream a video, but to the company itself.

In 2009, Netflix published a 129‐page set of slides entitled “Freedom & Responsibility Culture”1 in place of a staff handbook. The slides – which must rank among the most downloaded PowerPoint slides ever – were created by Netflix founder Reed Hastings and then Chief Talent Officer Patty McCord. Both have subsequently written books about it, which I highly recommend.

In among slides that explain things like Netflix's hiring and remuneration policy, there are some which outline their approach to compliance. The “c word” doesn't feature, but that's the substance of it. It's a little radical, so it won't be for everyone, but in the spirit of challenging ourselves, I think it's thought‐provoking.

Their philosophy is to separate errors into “recoverable” and “irrecoverable” errors. The former are things they'd rather not have happen but from which – as the name suggests – the firm can recover. The latter are things from which it is hard to recover: either “irrecoverable disasters”, for example, financial statements containing errors or hackers stealing customer credit card information, or “matters of moral, ethical, or legal principle”, for example, dishonesty or harassment.

Implicitly, this means having zero tolerance for irrecoverable things and a relatively high tolerance for recoverable things. This approach leads them to distinguish between a “good” process, which “helps talented people get more done”, and a “bad” process, which “tries to prevent recoverable mistakes”. As a result – and here's where it gets really interesting – this means the only types of rules you need are those which prevent irrecoverable errors.

I think this is a fascinating approach. Not only is it realistic in recognising things will go wrong, but also that not all rules are equally important. I'm not suggesting that Netflix's model is perfect. In many environments, it won't work. But there are some things we can learn from it.

In Aggregate, Not in Isolation

This brings me back to the rule. When I say that 100% compliance is impossible, I mean in aggregate. I'm not suggesting that we tolerate all mistakes. But we need to accept some. It is perfectly possible to achieve 100% compliance with a particular rule if we throw enough resources at it. If that weren't the case, we'd regularly be seeing instances of nuclear power stations blowing up. But in a world of finite resources, we can't do that for every rule. Our employees know that as well as we do.

This is why I like the idea behind the Netflix approach. Being open about the distinction between breaches of rules that are recoverable and those that are irrecoverable helps us to manage resources and staff attention. Because for those areas where 100% compliance is not just desirable but necessary, we will need their help to deliver it.

Back to Zero Accidents

This brings me back to the HKSC and their Zero Accidents Policy. I have a sneaking suspicion the HKSC knows it won't get zero accidents. But it sends a powerful signal to stakeholders, particularly their staff, of their primary objective. A road accident isn't an irrecoverable error for the HKSC in the Netflix sense, but the philosophy is the same: they're making clear what really matters and focusing on that.

Note

  1. https://humanizingrules.link/netflix.