11


Mining the data

For a body to remain healthy, its receptors and sensors need to be passing and receiving messages freely, speedily and accurately. So it is with a business. You have to know what’s going on, so good management requires good information. And information is everywhere. It is the spindrift on the wind in the world in which we work, the plankton in the sea where we try to get things done. Steven Pinker in his book How the Mind Works says ‘information itself is nothing special: it is found wherever causes leave effects. What is special is information processing.’ Information will always be there. Equally, more information will always be available: more than may be useful, more than you may be able to cope with. What is certain is that there will always be enough information to give you some insight and to act upon if you process it and use it. For any business, any organisation, any project, any management task, this throws up a series of requirements.

The devil is in the detail

  • In 1999, NASA lost a probe it had sent to Mars. It was supposed to go into orbit around the planet, but as it made its final approach it passed too close and was destroyed. The analysis of what went wrong showed one simple reason: one team had supplied thruster data in imperial units while the navigation software assumed the figures were metric. Result: disaster. Hundreds of millions of dollars and years of effort wasted.
  • BAE was building HMS Astute, a $2 billion ultra-modern submarine, for the Royal Navy. When it was tested in December 2007, what one newspaper described as ‘its most basic part’, its oil pump, failed, causing huge damage, massive extra costs and major delays.
  • On 15 December 2008, a headline in the London Evening Standard read: ‘£2 fuse brings £9 billion upgrade of West Coast [rail] line to a halt’.
  • My last example is perhaps the most intriguing (and scary) of all. While the Large Hadron Collider at CERN in Geneva was being commissioned with the aim of finding the ‘God’ particle, the Higgs boson, its operation was interrupted when a bird dropped a piece of bread on to the machine’s external electrical equipment, causing its cooling to fail. An earlier electrical fault had already closed the machine for months, with repairs costing many millions. And one speculation had been that the machine’s collisions might create a mini black hole in the space–time continuum, into which the whole Earth might have been sucked. Small mistake, big consequence! Fortunately, the consequence didn’t follow.

There is a widely held view that leadership and the role of senior management are about the bigger picture. In this chapter I will argue that this approach is fundamentally wrong. Understanding the smaller picture, if necessary the microscopic detail, is crucial to our understanding of the bigger picture, because otherwise the big picture will be the wrong picture. None of the disasters above would have happened had details not been overlooked. So:

  • learn to love detail;
  • attend to it, search it for what is significant;
  • focus on that, use it to inform your understanding and actions;
  • finally, use that significant detail to help shape your strategy.

Finding the kernel of truth

Frequently, people tell you they don’t have the information they need to manage an issue. They don’t know what is going on and, they insist, they cannot know. Never accept this. If people really were that ignorant, they wouldn’t know there was a problem.

MRSA

In a group of hospitals that were having big problems with the superbug MRSA, senior managers told me they couldn’t do anything about the problem beyond what they were already doing. It was the sort of community where you got MRSA; it came into the hospital and they simply had to put up with it. They had no data to back this up but they firmly believed it. I asked for information on each service in their four different hospitals. The figures showed marked differences, for which there was a series of possible explanations, each more plausible than the last, and all suggesting that major improvement was possible. As a result, management was liberated to look at the real causes of the problem, real causes that it had some chance of fixing.

Many organisations and their managers have access to good information but they don’t process it and use it to evaluate what is going on. In particular, they rely on overall figures. This is all right sometimes, but not generally. We need to be aware that we may be putting together essentially different things, apples and pears, which don’t add up. Aggregation relies on averages even when the variations and differences are crucial. It’s like having your head in the oven and your feet in the fridge, and saying that your average temperature is normal. Aggregation can be dangerously misleading. Split the data apart and you will get to the kernel.
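
The point is easy to demonstrate. Below is a minimal sketch in Python; the hospital names and infection figures are invented for illustration, not real data. The aggregate rate looks unremarkable, while the split figures expose an outlier.

```python
# Invented figures: infections and admissions for four hypothetical hospitals.
hospitals = {
    "Hospital A": {"infections": 12, "admissions": 10000},
    "Hospital B": {"infections": 9, "admissions": 8000},
    "Hospital C": {"infections": 60, "admissions": 9000},  # the outlier
    "Hospital D": {"infections": 11, "admissions": 11000},
}

# The aggregate: one smoothed-out number for the whole group.
total_infections = sum(h["infections"] for h in hospitals.values())
total_admissions = sum(h["admissions"] for h in hospitals.values())
print(f"Overall: {1000 * total_infections / total_admissions:.1f} per 1,000 admissions")

# The same data disaggregated: the kernel is in the split figures.
for name, h in hospitals.items():
    rate = 1000 * h["infections"] / h["admissions"]
    print(f"{name}: {rate:.1f} per 1,000 admissions")
```

On these invented numbers the overall figure, about 2.4 per 1,000, conceals the fact that one hospital’s rate is five or six times its neighbours’; that is where management attention belongs.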

Back trouble

A hospital having enormous problems with orthopaedic waiting lists had convinced itself that it didn’t have enough capacity to reduce them and that therefore it needed a major increase in resources to do so.

On looking closely at its data, it was obvious there was a problem but it wasn’t about resources. There were enough people, enough resources, enough time to deal with most patients presenting with most symptoms. The real problem was a small number of patients with very difficult back problems and only one person to treat them, with nowhere near enough time. Once that was understood, it was a matter of rejigging the balance of work to free up the time of the person with specialised skills and, when that still left a much smaller shortfall, farming out a very small number of patients to other centres which had those specialist skills and were prepared to take on the work at the right price.

Result: problem seen to be much smaller than thought, solution obvious, problem solved.

Having information available is a prerequisite to analysing it. Understanding what it means is one key to action. Another vital matter is sharing it. If information is communicated to those who can appreciate its meaning and significance, that in itself may be sufficient to create action.

During my time as CEO of Poole Hospital in the 1990s we extracted information about how each doctor used their time in outpatients: when they started, when they finished, how many patients they had seen, and so on. We shared this information generally – not only the individual’s figures but the figures of the other doctors doing the same thing. This made a huge difference. In many cases people were unaware of exactly what they were or weren’t doing. Simply seeing the information caused them to identify weaknesses and inefficiencies in their practice, which they readily eliminated. In other cases, seeing that colleagues were quicker or more productive led to competitive or shame-driven changes in behaviour. Where a difference persisted, it provided an opportunity for action with the individual that could be seen by others to be reasonable and fair.
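
As a sketch of the kind of comparison that can be circulated, here is some hypothetical Python; the session records, names and figures are invented for illustration, not Poole’s actual data.

```python
from statistics import mean

# Invented outpatient session records: (doctor, start hour, finish hour, patients seen).
sessions = [
    ("Dr A", 9.0, 12.5, 14), ("Dr A", 9.25, 12.0, 12),
    ("Dr B", 9.5, 11.75, 8), ("Dr B", 10.0, 12.0, 7),
    ("Dr C", 9.0, 12.75, 16), ("Dr C", 9.0, 12.5, 15),
]

# One line per doctor, with everyone seeing everyone else's figures.
for doctor in sorted({s[0] for s in sessions}):
    own = [s for s in sessions if s[0] == doctor]
    hours = mean(finish - start for _, start, finish, _ in own)
    patients = mean(seen for *_, seen in own)
    print(f"{doctor}: {hours:.1f} hours per clinic, "
          f"{patients:.0f} patients, {patients / hours:.1f} per hour")
```

The table such a script prints is unremarkable in itself; what matters is that everyone sees it, and sees everyone else’s line alongside their own.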

A case of competitive change

An outstanding neurosurgeon I knew told me how he had been keen to make his practice more efficient and save money. In common with his colleagues (as far as he knew), his patients stayed in hospital for 10 days after one operation he performed frequently. He felt there were bound to be all sorts of ways he could improve efficiency at the margins and reduce this, perhaps by 10%. He went along to an improvement session, hoping to learn how. A fellow surgeon spoke and explained how his patients stayed in for just four days after the operation in question. After recovering from the shock, my neurosurgeon friend realised that he could take a much more radical look at his practice than he had dreamed possible because someone else had done it successfully. Within a short time, his patients were staying in for only four days too.

Approximating

You need to understand how accurate information needs to be. In some fields and at certain times, it needs to be totally accurate. To take CERN again, an experiment in 2011 involving the most minutely precise measurements appeared to show some particles travelling from Geneva to a location in Italy slightly faster than light can travel, which is impossible. Concern over this result was finally resolved when it wasn’t repeated and the source of the apparent excess speed was found: a faulty cable connection in the experiment’s timing system.

However, for most managers for most of the time, securing and using information is not about knowing everything perfectly. In the day-to-day managerial world, that sort of requirement would be a basic error and a recipe for paralysis.

Nor is it about knowing what’s right overall, the aggregate answer, as I explained above: that conceals the vital importance of detail. What it is about is approximate truth. If you have searched, been through the detail and obtained a ‘good enough’ answer, an explanation which stands up, then act when you need to: go with it.

Terminal 5, Heathrow

The fiasco of the opening of Terminal 5 at Heathrow Airport in March 2008 is informative.

The central problem was the loss of baggage. Volumes of baggage vary from hour to hour, day to day, plane to plane and passenger to passenger. These variations partly get smoothed out by statistical averaging, but also result in predictable peaks and troughs, for which capacity can be flexed and staffing altered. However, there was consistent evidence over a period of about a year that British Airways’ baggage losses at Heathrow were among the worst in the industry worldwide, and not improving. Moreover, the training that had been undertaken looked to have ticked the boxes but in fact was inadequate: handlers didn’t know what to do and security and baggage handling were working in opposite directions with no clear priority between them. BA seems to have assumed that technology would solve the problems, but the reality was that the system was simply not fit for purpose. Given all this, the disaster that happened was inevitable.
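
The averaging point is worth pinning down. Here is a minimal sketch, with invented hourly volumes and an assumed handling rate per member of staff (neither is BA’s real figure):

```python
# Invented hourly baggage volumes for one morning, and an assumed handling rate.
hourly_bags = [200, 350, 900, 1400, 1100, 500, 300, 250]
BAGS_PER_HANDLER_PER_HOUR = 100  # assumption for illustration

average = sum(hourly_bags) / len(hourly_bags)
print(f"Staff needed for the average hour: {average / BAGS_PER_HANDLER_PER_HOUR:.0f}")
print(f"Staff needed for the peak hour:    {max(hourly_bags) / BAGS_PER_HANDLER_PER_HOUR:.0f}")

# Staffed to the average, every peak hour feeds a backlog the quiet hours never clear in time.
staff = round(average / BAGS_PER_HANDLER_PER_HOUR)
backlog = 0
for hour, bags in enumerate(hourly_bags):
    backlog = max(0, backlog + bags - staff * BAGS_PER_HANDLER_PER_HOUR)
    print(f"hour {hour}: backlog {backlog} bags")
```

Staffing to the average looks efficient on paper; staffing flexed to the predictable peaks is what actually keeps the bags moving.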

It could so easily have been avoided if BA had disaggregated the parts of the problem, mastered the detail and approximated. With approximation, you gather what information you can and act on it, even though it is partial and imperfect. Approximate information isn’t once and for all: it is good enough information to get you going. Once you have better information, you can act again. Mastering detail and approximating would have enabled the system at Heathrow to be fixed before Terminal 5 was opened, and provided a system that would have worked well enough at the opening. That system could then have been refined and improved early on, without the breakdown that actually occurred.

Using information for performance management

If the information isn’t owned, if no one takes it as theirs or their responsibility, it can simply lie in the ether. In such circumstances, many pieces of information, even if they are vital, will not find a natural unequivocal owner. They may very well involve a whole lot of people, and a poor output or outcome may be ascribable to any one or a whole number of actions and actors.

Performance management in practice

Poor performance often relates to a failure to meet demand and a linked lack of capacity. In a business I became involved with, people assumed they were managing sufficiently well, with the odd wobble. When things started to go wrong, the available information, which until then had been largely ignored, was seized upon by managers up and down the line, who each interpreted it from their own standpoint and set in hand action to put things right. The trouble was that it wasn’t coordinated and it wasn’t entirely consistent. So a number of responses happened at once, some of which replicated each other and some of which contradicted each other.

The way they got over it was simple: performance management. Each manager’s responsibility to monitor what was going on was specified, and it was made clear what action it was their responsibility to take when action was needed. At each level they looked at the information needed, its regularity, the nature of the monitoring and their responsibility, so that these never overlapped or were duplicated, and no requisite action went unaccounted for. For the first time, the responsible operational manager knew exactly what was expected of them, while the Chief Executive was also clear about the overall achievement and could signal when things were moving off track and required attention. As a result, the apparently chaotic capacity shortfall disappeared and management action became much swifter and more focused.
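
As a sketch of what that specification might look like, here is some hypothetical Python; the metrics, owners, thresholds and actions are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Monitor:
    metric: str       # what is watched
    owner: str        # the single manager responsible for acting on it
    frequency: str    # how often it is checked
    threshold: float  # the level at which action is required
    action: str       # what the owner must do when the metric is off target

# One owner per metric: responsibilities never overlap and nothing goes unowned.
monitors = [
    Monitor("order backlog", "Operations Manager", "daily", 50, "reassign staff to clear it"),
    Monitor("delivery delay (days)", "Logistics Manager", "weekly", 2, "escalate to suppliers"),
    Monitor("capacity shortfall (%)", "Chief Executive", "monthly", 5, "review the overall plan"),
]

def check(monitor: Monitor, value: float) -> None:
    """Compare one reading with its threshold and name the responsible action."""
    if value > monitor.threshold:
        print(f"OFF TARGET {monitor.metric}={value}: {monitor.owner} must {monitor.action}")
    else:
        print(f"on target  {monitor.metric}={value}")

check(monitors[0], 72)  # the owner and the required action are unambiguous
```

The content of such a table matters less than its properties: every metric has exactly one owner, a set frequency and a pre-agreed action, which is what makes the response swift and consistent.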

Turning the world upside down

You have passed the first test, getting the information. You have passed the second test, questioning it, seeing what it means and trying to use it, but you are still stuck. Next try to turn the information round. Look at it through different filters. Here are two examples of what I mean.

Crew management

In my current hospital, Great Ormond Street, a paediatric surgery team was looking for ways to reduce the time it took to effect the nine-minute handover of very vulnerable, very dependent patients immediately after an operation, from the surgical team to an intensive care unit (ICU) team. They were looking around for inspiration. Two doctors suggested that a great example of ‘crew management’ was Formula 1 motor racing at pit stops, where both speed and absolute accuracy and reliability are of the essence. They contacted the McLaren and Ferrari teams (and airline captains, people from another industry with something to offer) and persuaded them to work with them. The result was the development of a four-step procedure which halved information loss, reduced handover time by over a minute, and produced fewer technical errors.

When I arrived at West Herts, the government was using ethnic monitoring information to help eradicate any racial discrimination towards and between patients, and had set hospitals a target for recording patients’ ethnicity. It was judged a key yardstick of commitment and intent. At West Herts, no special efforts or measures had been taken to achieve the target, on the grounds that many patients didn’t want to reveal their ethnicity. In other places, with the right approach, combining sensitivity and persistence, and training all the front-line information-taking staff, it was being achieved, so it was clearly possible. I had brought a new discipline and rigour to target identification and achievement from the word ‘go’, so staff were now alert to the target’s importance and the need to do everything possible to achieve it.

The problem was that we were now almost two-thirds of the way through the year with an achievement of 30% to date and a target for the year of 80%. Simple arithmetic showed that 100% achievement for the rest of the year wouldn’t do it. Nonetheless we went for 100% and started to get near it, week by week. It was at that point that the lead Director realised something that everyone else had overlooked. Although most patients in the first seven months had been missed and although they were no longer our patients, their ethnic status hadn’t changed and we could still contact them. So that’s what we did. We set up a project and worked out detailed, systematic procedures which would reach all our former patients, using the new, effective techniques we had developed since my arrival. Week by week the percentages went up and by the year end we achieved over 80%. We achieved the impossible!
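
The simple arithmetic, sketched below with invented round numbers (the patient volumes and the recontact response rate are my assumptions, not the hospital’s actual figures), shows both why the prospective route alone was hopeless and how recontacting former patients closed the gap:

```python
# Invented round numbers for illustration.
months_elapsed, months_left = 8, 4
patients_per_month = 1000      # assumed constant patient flow
recorded_so_far = 0.30         # 30% of earlier patients had ethnicity recorded
target = 0.80

total = 12 * patients_per_month
done = months_elapsed * patients_per_month * recorded_so_far

# Route 1: record 100% of new patients for the rest of the year.
prospective = done + months_left * patients_per_month
print(f"Prospective only: {prospective / total:.0%} (target {target:.0%})")

# Route 2: former patients' ethnicity hasn't changed, so contact them as well.
recontact_rate = 0.75          # assumed success rate of the recontact project
backlog = months_elapsed * patients_per_month * (1 - recorded_so_far) * recontact_rate
print(f"With recontacting: {(prospective + backlog) / total:.0%}")
```

On these assumptions the prospective route tops out at little over 50%, while adding the recontact project carries the year comfortably past 80%.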

This story circulated round the hospital and beyond, and undoubtedly helped people to try new ideas and generate new solutions to problems that they would previously have given up on. This was now a place where (practically) anything was possible.

To sum up …

If you are vigilant, if you look for problems, then you will need useful information. But the usefulness of the information will in reality depend upon the use to which you put it. Time and again I have seen organisations and systems in difficulty which either didn’t have information, and so could not manage, or didn’t use the information they had, and so did not manage. It is crucial not only that information is gathered, but that how it can be used is thought through. So ask:

  • How often is it necessary to check whether a particular process or output is on target?
  • Who should do this?
  • What action should be taken if it is off target?
  • Are there systems that ensure this happens as a matter of course?
  • Are those running the systems accountable for making these monitoring assessments and then acting if problems appear, if things go off track?
  • Do they fully understand this?
  • Is there a tight chain of information flow and response through the organisation from the delivery point to the most senior authority?

If these questions cannot be answered in the affirmative, and specifically, then the organisation is at unnecessary risk. On the other hand, if you have information and use it to manage performance in a regular way, then you will start to understand better the patterns you are seeing and the changes in them. It is vital to have or develop a mentality which sifts the detail you gather for significance, and which highlights and checks anything out of the ordinary to see whether it is indeed significant or merely an aberration.
