CHAPTER 29
R IS ALSO FOR REMARKABLE

Introduction

The fifth and final radar in the framework is “Remarkable”. Like the other radars, it draws insights from our employees' behaviour. However, unlike the other radars, its focus is not on understanding that behaviour itself. Rather, it is there to help us better understand how we are designing and implementing our framework of rules.

The radar uses something we would probably otherwise not think to look at: rules with unexpected levels of noncompliance – whether the noncompliance rates are unexpectedly high or unexpectedly low.

While rules with high levels of noncompliance in absolute terms will already be identified under “Rebellious”, “Remarkable” identifies rules that have unexpectedly low (or high) levels of noncompliance. In other words, levels of noncompliance that we didn't predict.

Rationale

There are two main reasons for looking at rules with “remarkable” levels of noncompliance:

The first is to assess the design effectiveness of our rules. By comparing how we thought a particular rule would “perform” with its actual “performance”, we can assess how effectively our rules have been designed.

The second is that it can help to sharpen our behavioural antennae. If we misjudge our employees' propensity to comply with a particular rule – by either over‐ or underestimating it – then understanding why can provide valuable insights.

Two Unspoken Truths

This radar builds on ideas we have explored throughout the book. The key one is that rules can and should be treated as attempts to influence human decision‐making. On the face of it, the word “attempt” seems inappropriate. After all, rules are mandatory and backed by an authority, so we tend to expect people to follow them. But we also know that not all rules are followed all of the time; if they were, there would be no need for this book!

For these purposes, I think it is helpful to use an (admittedly loose) analogy of rules as advertisements. Both are forms of behavioural intervention. Just as advertisements are attempts to influence people to buy products or services, rules are attempts to influence people to behave in a particular manner. However, unlike rule writers, advertisers are more likely to recognise the experimental nature of their work. Before launching an advertising campaign, advertisers will have some idea of what impact they expect it to have – not least because they'll need that to justify the expenditure of producing and running it. Then, once the campaign has gone live, they can compare its actual performance with what they were expecting. Companies simply don't launch advertising campaigns without some idea of how effective they'll be.

Like advertising campaigns, rules also have a cost associated with them; it's just less obvious. For every rule, there is the effort required of employees to comply with it and the effort of implementing compliance initiatives to maintain and enforce it.

To do this effectively requires us to acknowledge two generally unspoken truths about compliance. They're things that marketers understand intuitively when it comes to advertising, but we tend not to think about in a compliance context.

The first unspoken truth is the one we saw in Chapter 22: when lots of people fail to comply with a particular rule, it's a rule problem, not a people problem. When we “craft” rules – as when marketers create advertisements – they aren't always perfect. So, when employees fail en masse to comply with our rules, it isn't entirely their fault!

The second unspoken truth is that even when we “craft” our rules perfectly – at least from a theoretical perspective – they don't always work the way we expect them to. In Chapter 12, we explored the concept of “a fine is a fee”, where rules intended to deter a behaviour instead encouraged it. Just as adverts don't always achieve their expected result – even with the best market research in the world – the same will also be true of rules.

Note that word “expected”. When I say that the rule hasn't achieved its expected result, I don't mean that the rule didn't achieve 100% compliance, which, as we know from Rule Number Two, is not always feasible or desirable. I mean that it didn't fulfil a realistic expectation of what it would deliver.

What Is Realistic?

So, if 100% isn't always realistic, what is? Obviously, the answer will depend not just on the rule, but also on how it is perceived by employees. To repeat the mantra that has appeared throughout this book, we need to think not about how we would like our employees to perceive it, but how they are likely to perceive it.

By way of example, imagine that we introduce a rule that every member of staff will be required to take an extra paid day of holiday per quarter. In most cases, this rule is likely to be positively received, and we should expect very high levels of compliance with it. It is unlikely to be 100%, but in the absence of, say, pressure from line managers not to take it, it should be very high.

Then imagine we introduce a rule that bans employees from bringing personal mobile devices into the workplace. That is likely to be far less popular and it would be reasonable to expect far lower rates of compliance.

Of course, the specific circumstances of the workplace in question will be highly relevant. People working for employers where there is huge pressure to meet sales targets might not want to take an additional day's holiday. Staff working in highly sensitive workplaces may well easily comply with the rule banning mobile devices. But it will, on some level, be possible to have a view as to how likely employees are to comply with a particular rule.

The idea behind the radar is that we formulate a hypothesis and then compare what happens with what we expected to happen. If compliance levels are unexpectedly high or unexpectedly low, then we should investigate why. Say rates are far higher than expected; is that because people are actually complying or because we are measuring the wrong thing? Or perhaps they have found a loophole. Or did we misjudge the likelihood of compliance? Whatever the explanation, we can learn something useful by exploring.
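This expected-versus-actual comparison can be sketched in a few lines of code. The sketch below is purely illustrative – the rule names, rates, and the 10-point tolerance are invented assumptions, not figures from the book – but it shows the basic mechanic: flag any rule whose measured compliance rate sits remarkably far from our hypothesis, in either direction.

```python
# A minimal sketch (hypothetical data and tolerance) of flagging "remarkable"
# rules: those whose actual compliance deviates sharply from what we expected.

def remarkable_rules(expected, actual, tolerance=0.10):
    """Return rules whose actual compliance rate differs from the expected
    rate by more than `tolerance` (a fraction: 0.10 = 10 percentage points),
    in either direction."""
    flagged = {}
    for rule, exp_rate in expected.items():
        act_rate = actual.get(rule)
        if act_rate is None:
            continue  # no measurement yet for this rule
        gap = act_rate - exp_rate
        if abs(gap) > tolerance:
            flagged[rule] = gap  # positive = better than expected
    return flagged

# Hypothetical figures: the extra-holiday rule was expected to perform well,
# the device ban poorly.
expected = {"extra_holiday": 0.95, "device_ban": 0.60}
actual = {"extra_holiday": 0.70, "device_ban": 0.62}

print(remarkable_rules(expected, actual))
# extra_holiday is flagged (about 25 points below expectation);
# device_ban is within tolerance, so it is not.
```

Note that the flag is deliberately symmetric: a rule performing 25 points *above* expectation is just as worthy of investigation as one performing 25 points below.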

Expectation Management

In order for this radar to work, we will need to have an idea of the likely level of compliance. There is no hard and fast rule about how to do this, but here are three suggestions for ways of thinking about it:

The first is a “rough‐and‐ready” approach: looking at known compliance rates and asking policy and rule owners to identify anything that has surprised them. While this might yield results, it creates an obvious incentive for them to identify only policies where compliance levels are higher than expected and to ignore those where they are lower. It also requires them to be honest about what level they expected.

The second way – which requires some effort and planning – is to ask policy or rule owners to estimate the difficulty of complying with their policy or rule, either when it is initially launched or as part of a review process. This can be done using a simple scale – say, one to five – which then serves as the basis for comparison.
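One possible way to make such a scale usable is to map each difficulty score to a rough expected-compliance band, then check measured rates against the band. The bands below are entirely illustrative assumptions – each organisation would calibrate its own – but they show how a one-to-five score becomes something we can compare against.

```python
# Illustrative mapping (assumed, not prescriptive) from a one-to-five
# difficulty score to a rough expected compliance band.
DIFFICULTY_BANDS = {
    1: (0.90, 1.00),  # trivially easy or popular to comply with
    2: (0.80, 0.95),
    3: (0.65, 0.85),
    4: (0.50, 0.70),
    5: (0.30, 0.55),  # very hard or unpopular
}

def within_expectations(difficulty, actual_rate):
    """True if the measured compliance rate falls inside the band
    implied by the owner's difficulty score."""
    low, high = DIFFICULTY_BANDS[difficulty]
    return low <= actual_rate <= high

print(within_expectations(2, 0.90))  # inside the 80-95% band -> True
print(within_expectations(2, 0.40))  # far below the band -> False
```

A rate falling outside its band is exactly the kind of “remarkable” result the radar is looking for.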

Obviously, these first two measures are highly subjective. But it is worth remembering that we're not looking for scientific analysis; we're just trying to get a rough idea, with the aim of encouraging those creating the rules to think about “compliability”.

We also need to create an environment where the exercise isn't about being “right” or “wrong”, but instead is about learning. What we want to avoid is incentivising those responsible for writing the rules to “game” what we're asking them to do! That would be ironic!

The third and more objective method is to use peer group analysis and look at how one rule performs relative to its peer rules. That peer group might be rules covering a similar topic – for example, all rules on anti‐bribery and corruption – or it might be rules of an equivalent length, or which apply in similar circumstances. For example, we might compare all rules related to foreign travel since they will all apply in the same context.
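The peer-group comparison can also be sketched concretely. The rule names and rates below are invented for illustration; the approach – flag any rule whose compliance rate sits more than a chosen number of standard deviations from its peer-group mean – is one simple way of operationalising the idea, not the only one.

```python
# A rough sketch (invented data) of the peer-group approach: compare each
# rule's compliance rate with the mean of its peer group and flag rules
# more than `k` standard deviations away.
from statistics import mean, stdev

def peer_outliers(rates, k=1.0):
    """rates: mapping of rule name -> compliance rate for one peer group
    (e.g. all foreign-travel rules). Returns each outlier rule with its
    z-score (distance from the group mean in standard deviations)."""
    if len(rates) < 3:
        return {}  # too few peers for a meaningful comparison
    mu = mean(rates.values())
    sigma = stdev(rates.values())
    if sigma == 0:
        return {}  # every rule performs identically
    return {rule: (rate - mu) / sigma
            for rule, rate in rates.items()
            if abs(rate - mu) > k * sigma}

# Hypothetical peer group: rules that all apply to foreign travel.
travel_rules = {
    "expense_receipts": 0.82,
    "visa_checks": 0.79,
    "travel_approval": 0.84,
    "laptop_encryption": 0.35,  # stands out from its peers
}
print(peer_outliers(travel_rules))
# laptop_encryption is flagged; the others sit close to the group mean.
```

Because the comparison is relative to peers rather than to a guessed absolute level, this method needs no prior estimate from the rule owner – which is what makes it the more objective of the three.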

In doing this, we can identify where our expectations of how the rules would be complied with haven't matched up to reality. We can then identify why and see what lessons we can learn. Not just about our employees, but also about our own behavioural antennae.
