CHAPTER 5: WHAT THE BUSINESS NEEDS TO MEASURE

He who pays the piper calls the tune. Origin obscure.

Let’s start our more detailed look at measurement on the client side. Any business will be interested in whether:

  • it is getting the service as contracted (or as detailed in the SLA);
  • value for money is being achieved;
  • the business and its users are happy and getting what they need;
  • problems are properly dealt with;
  • changes, small and big, including ‘projects’, are handled effectively;
  • IT plans allow the business to keep ‘a step ahead’.

This is not a definitive list for all businesses. Many will be interested in other things besides.

To be in control of these requirements, the business needs regular measurements or assessments of how things stand and how they are trending (e.g. improving, worsening or erratic).

For illustration, Figure 4 shows an example report on compliance with an SLA.

image

Figure 4: Example SLA compliance report

A board member comments:

Don’t talk to me about IT measurement. We’ve been using this IT company, let’s call them ICTPROVID, for years.

We keep hearing about radically different IT infrastructure based on Cloud Computing, apps, open standards, smaller projects and a lot more SMEs in the supply chain. I can’t see much sign of it. For us, it’s very much a case of ‘same old, same old’.

What I’d like to know is how the prices we’re paying compare with what other organisations using ICTPROVID pay: are we all in the same boat or are some of them getting better commercial arrangements? I’d like to compare us with organisations that aren’t using ICTPROVID, too, to see if they fare better. I know they say you can’t just compare one client organisation with another by looking at cost per user (what they used to call cost per desktop before we all migrated to laptops), but the comparisons I’ve seen suggest we’re getting a shabby deal. Admittedly, things have got a bit better since we renegotiated with the company, but there’s still a way to go.

It’d be interesting to look at customer satisfaction because we hear a lot of grumbles about there being too many niggling faults and that they aren’t being resolved quickly enough. You always get grumbles about IT, but we could do with some stats to confirm whether there’s actually any substance to the complaints about ICTPROVID. I know some of our users feel the contract doesn’t give them what they need, so some of the niggles may be in grey areas where the company’s performance complies with the contract, which means we’re between a rock and a hard place, as we’re unhappy to live with it, but don’t want to pay through the nose to get it changed.

We’ve been thankfully free of the IT project disasters that have afflicted some public sector organisations, but, at the same time, you can’t rely on them to deliver what they say they’re going to deliver at the time they promise. Again, I don’t know how we compare with others and it would be useful to have some meaningful stats to let us know.

Contract management seems to be a black art, with the commercial and IT folk united in reassuring the Board that ICTPROVID fully complies with the terms of the contract. That makes me question the effectiveness of the contract. I’d like to see the facts and figures.

Let us turn to that well-known phrase or saying: continuous improvement. Is ICTPROVID performing better for us this year than last? Is it performing better for others this year than last? Are we getting enough year-on-year improvement? Are others? Can we find out?

A business manager in the same organisation comments:

I’m reasonably happy with the service we get from ICTPROVID, which makes me a bit of a rarity around here. The IT service is mostly quite reliable, I’ve got a nice laptop that’s easy to take on the train although the battery life isn’t much cop, and we now have e-mail access when we’re travelling, which is a major step forward. The facilities I use, like e-mail, word processing, spreadsheets and presentation slides, all work smoothly. The finance and HR systems are a bit of a pain, with yet more passwords to remember, but that isn’t ICTPROVID’s fault.

Where I do have a complaint with ICTPROVID is when things go wrong. We often work to very tight deadlines and we’ve had a couple of instances in the last six months when the service has died on us at the worst possible time. One of the failures lasted six hours and the other one was the best part of 20 hours. That time, ICTPROVID ended up having to bring people in from elsewhere in the country. Now they did send out their account director, a really nice person, to explain what had gone wrong and what they were doing to make sure it wouldn’t happen again. To the account director’s credit, it hasn’t. But when you’ve been let down that badly, it leaves a sour taste in the mouth.

Another thing I find disturbing about them is their service desk. The people are friendly enough; that’s not the problem. It’s just that they’re inefficient. For instance, we came in one morning to find we couldn’t access emails or the Internet, so the service was next to useless. I phoned the service desk and they said they would log the call. When I pressed whether they’d heard about the problem, they admitted they had, which was just as well as my friends sitting beside me said they’d already phoned in about the problem. But here’s the rub: the service desk were waiting for the number of calls logged about the problem to reach, say, 20 (I can’t remember the exact figure) before escalating it to category one. I can just about see their perspective; one user experiencing this problem could be down to that user. But three or four users with the same symptoms: wake up and smell the coffee!

Another organisation’s board member comments:

We run our own IT department because we don’t trust anybody else to do it and, frankly, it’s too important to the business to do anything else. With an in-house IT department, we can control the direction of IT in the company: we decide what it does and how much we’re prepared to spend. We can say ‘jump’ and expect to be asked ‘how high?’ And we decide whether we’re getting what we pay for.

I’m not saying we’re by any means perfect. The IT folk have a bad habit of changing things without telling you, which would be fine if their changes didn’t affect us, but you end up wasting time chasing answers to niggling faults or to changes to the way you have to use the system. To their credit, they’re pretty good at fixing problems once identified, so we don’t get a lot of repeat faults.

The service desk is friendly and efficient, so if you do have a complaint or query it’s dealt with politely and effectively.

Strategic-level changes and projects are agreed at Board level. Sometimes they’re proposed by the IT department and sometimes they come out of strategic reviews. It means the business and the IT department are singing from the same song sheet, so the conditions are set for projects to succeed. That’s not to say projects always come in to spec, on time or on budget. But on the whole we’re satisfied with the way the business handles IT projects.

Measures for the business: checklist

Note that when IT is externally provided, what the client can require to be measured and what it can demand to be done about the findings will depend on what’s in the contract. Regardless, the business should ask for a regular report on contract or SLA compliance, which should draw on the measurements discussed below.

Service availability

What

  • The percentage of scheduled time that the service is available for use.
  • This may include figures categorised by location, where the service provision covers more than one site.
  • This may include availability of each important application, measured as a percentage of scheduled time for which the application is available.
  • This may include a separate measure of down time (or down time per location, or per key application) over a certain threshold. For example, the business may only require a report on service outages of over one hour, measured along these lines (a minimal calculation sketch follows this list):
    • frequency and length of business-critical outages of one hour or more;
    • if location A would struggle after 30 minutes of down time, a report on location A service outages of 30 minutes or more;
    • if the loss of application X were business-critical after four hours, a report on the frequency and length of application X outages of four hours or more.
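To make the availability calculation concrete, here is a minimal sketch in Python. It assumes outages are logged as simple records; the 168-hours-a-week schedule and the one-hour reporting threshold follow this chapter’s worked example, while the outage data themselves are invented.

    # A sketch, not a definitive implementation; outage data are illustrative.
    SCHEDULED_HOURS_PER_WEEK = 168    # service scheduled 24x7, as in the example
    REPORT_THRESHOLD_HOURS = 1.0      # report only outages of one hour or more

    outages = [
        {"location": "A", "hours": 0.5},
        {"location": "B", "hours": 6.0},
    ]

    # Availability: the percentage of scheduled time the service was usable
    down_time = sum(o["hours"] for o in outages)
    availability = 100 * (SCHEDULED_HOURS_PER_WEEK - down_time) / SCHEDULED_HOURS_PER_WEEK
    print(f"Availability this week: {availability:.2f}% of scheduled time")

    # Outages over the threshold, as the business might ask to see reported
    for o in outages:
        if o["hours"] >= REPORT_THRESHOLD_HOURS:
            print(f"Reportable outage at location {o['location']}: {o['hours']} hours")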

When

  • Timing and frequency will depend on how big an issue availability is or has become. During a provider’s first few months, statistics may be needed as often as once a week, covering a period as short as a week, whereas once the service is stable, monthly or quarterly figures may suffice. Equally, the frequency of gathering statistics may need to be increased, and the measurement period shortened, if the service becomes unstable or at times of significant IT change.
  • Figures 5–8 show various ways of measuring and presenting service availability over a full year for a service scheduled to be online 168 hours a week, which was afflicted by two outages of 20 hours and six hours respectively. A longer measurement period of, say, three months will give a smoother picture, with the blips ironed out, making it easier to read trend information (direction of travel). A three-month measurement period rolled forward every month (every four weeks) will provide overview and trend information faster than a set of back-to-back three-month measurements. Regardless, such a long period can mask availability problems that cause the business aggravation, in which case these measurements would need to be supplemented by more granular figures.
  • Table 1 shows the incidence and duration of service outages, for the same service. The client will probably want to see these only if availability is problematic.

image

Table 1: Incidence and duration of service outages

image

Figure 5: Availability chart: four 13-week periods

image

Figure 6: Availability chart: 13-week average rolled forward every four weeks

image

Figure 7: Availability graph: 13 four-week periods

image

Figure 8: Weekly availability chart
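To show the arithmetic behind these charts, here is a minimal Python sketch for the 168-hours-a-week service with its 20-hour and six-hour outages. The weeks in which the outages fall are assumptions made purely for illustration.

    # A sketch of the annual figure and of the rolling average in Figure 6.
    SCHEDULED_PER_WEEK = 168
    WEEKS = 52

    weekly_availability = [100.0] * WEEKS
    weekly_availability[10] = 100 * (SCHEDULED_PER_WEEK - 20) / SCHEDULED_PER_WEEK  # 20-hour outage
    weekly_availability[30] = 100 * (SCHEDULED_PER_WEEK - 6) / SCHEDULED_PER_WEEK   # six-hour outage

    # Over the year: 26 hours down out of 168 x 52 scheduled hours
    annual = 100 * (SCHEDULED_PER_WEEK * WEEKS - 26) / (SCHEDULED_PER_WEEK * WEEKS)
    print(f"Annual availability: {annual:.2f}%")   # about 99.70%

    # 13-week measurement period rolled forward every four weeks, as in Figure 6
    WINDOW, STEP = 13, 4
    for start in range(0, WEEKS - WINDOW + 1, STEP):
        avg = sum(weekly_availability[start:start + WINDOW]) / WINDOW
        print(f"Weeks {start + 1}-{start + WINDOW}: {avg:.2f}%")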

Other service problems

What

  • The measurements will comprise a summary of incidents affecting the service, organised by severity/priority and, if required, by problem category (e.g. security, capacity). The number of incidents and the time to close against target should be included for each severity level, as sketched below. The summary is often accompanied by an explanatory report.
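A minimal sketch of such a summary, assuming each incident record carries a severity, a problem category and a time to close. The field names, close-time targets and data are illustrative, not drawn from any particular toolset.

    from collections import defaultdict

    CLOSE_TARGET_HOURS = {1: 4, 2: 24, 3: 72}   # illustrative targets per severity

    incidents = [
        {"severity": 1, "category": "security",      "hours_to_close": 3},
        {"severity": 2, "category": "capacity",      "hours_to_close": 30},
        {"severity": 2, "category": "configuration", "hours_to_close": 20},
    ]

    # Count incidents, and misses against the close-time target, per group
    summary = defaultdict(lambda: {"count": 0, "missed_target": 0})
    for inc in incidents:
        row = summary[(inc["severity"], inc["category"])]
        row["count"] += 1
        if inc["hours_to_close"] > CLOSE_TARGET_HOURS[inc["severity"]]:
            row["missed_target"] += 1

    for (severity, category), row in sorted(summary.items()):
        print(f"Severity {severity} / {category}: "
              f"{row['count']} incident(s), {row['missed_target']} missed target")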

When

  • The measurements are best presented at the same frequency, and covering the same period, as the availability figures (typically monthly; quarterly can suffice for a stable service, while an unstable service, or part of one, may warrant more frequent reporting).
  • Table 2 and the graph in Figure 9 show service incidents for the example service, in categories: security, capacity, configuration and release management. For simplicity, only one severity level is shown. The text accompanying the graph in Figure 9 is a typical client report explaining the incidents.

image

Table 2: Service incidents by category: example table

image

Figure 9: Example incident report: weeks 1–8

Customer satisfaction

What

  • Some means of assessing customer satisfaction with the service, typically on a five-point scale, such as: very satisfied, satisfied, neither satisfied nor dissatisfied, dissatisfied and very dissatisfied.
     
    The survey can be undertaken by the client on its own behalf. It really needs to be accompanied by open questions about what is and isn’t good, to inform the client’s management of its relationship with the provider. Inadequate communication between the provider and the users is a common cause of dissatisfaction. External and internal users can be covered by a combined survey or by separate surveys.
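For illustration, a minimal sketch of tallying survey responses on the five-point scale; the responses themselves are invented.

    SCALE = ["very satisfied", "satisfied", "neither satisfied nor dissatisfied",
             "dissatisfied", "very dissatisfied"]

    responses = ["satisfied", "satisfied", "very satisfied", "dissatisfied",
                 "neither satisfied nor dissatisfied", "satisfied"]

    # Distribution of responses across the scale
    for level in SCALE:
        count = responses.count(level)
        print(f"{level}: {count} ({100 * count / len(responses):.0f}%)")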

When

  • This should typically be undertaken once a year, although twice a year or more often may be appropriate if the client’s or users’ satisfaction with the provider is low.

IT costs

What

A capable client will know from its finance department how much its IT is costing, so it may be more interested in statistics showing what it is getting for its money.

  • For service operations, other things being equal, the cost per user provides a good indication of cost efficiency compared with others and with your own previous measurements. Be careful to compare like with like, and include all the IT provider costs unless there’s a good reason to exclude them.
  • One thing to single out by means of separate statistics is the cost of projects and other service changes, as the business needs to know how much it is paying for IT change without burying it as part of service operations. Where the business requirement is stable, the cost of change should be low. Particularly for projects, both provider and client costs should be of interest.
  • Depending on the type of business, it may be illuminating to consider how well the breakdown of IT spend between normal operations and IT-enabled business change aligns with the business’s direction of travel. Thus, a smaller proportion of developmental spend this year than last, in an organisation seeking IT-enabled advantage, might suggest questions need to be asked. (A worked sketch of these cost measures follows this list.)
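Here, as promised, is a worked sketch of these measures; every figure is invented and would in practice come from the finance department.

    # Cost per user for service operations (compare like with like over time)
    total_it_cost = 2_400_000     # all provider costs for the period
    user_count = 1_200
    print(f"Cost per user: {total_it_cost / user_count:,.0f}")

    # Separating the cost of change from the cost of normal operations
    operations_cost = 1_800_000   # running the existing service
    change_cost = 600_000         # projects and other service changes
    change_share = 100 * change_cost / (operations_cost + change_cost)
    print(f"Spend on IT change: {change_share:.0f}% of total IT spend")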

When

  • If IT budgets are under control, there is no need for the client to measure costs every five minutes. Depending on the provider’s financial discipline and on service stability, a quarterly or even an annual report could be appropriate. If there is a lot of IT change, then more frequent measurement may be needed, perhaps as often as monthly, especially if the client is concerned about the cost of change.

Handling of change: projects

What

  • Given the legendary costs of IT project failures, probably the two most interesting things to measure about projects are how much they cost (as above) and whether they’re successful. What constitutes success isn’t always as clear-cut as delivering to requirement, to time and to budget. These statistics are needed, but may have to be supplemented by business judgements as to whether a slightly reduced scope or a slightly later delivery is acceptable.
  • Measurement of projects in flight tends to follow a well-trodden path involving progress against milestones and budget and, often, statistics on problems and changes. Post-project statistics on problems and changes affecting projects should be gathered for the benefit of future projects.
  • In an organisation that undertakes a lot of projects, a portfolio assessment showing the number of projects in gestation, planning, execution, testing and so on, and the time taken at each stage, will provide a useful view of what is going on and show up any pinch points, as sketched below.
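A minimal sketch of such a portfolio view, assuming each project records its current stage and the time spent there; stage names and data are illustrative.

    from collections import defaultdict

    projects = [
        {"name": "P1", "stage": "planning",  "weeks_in_stage": 6},
        {"name": "P2", "stage": "execution", "weeks_in_stage": 14},
        {"name": "P3", "stage": "execution", "weeks_in_stage": 40},  # a possible pinch point
        {"name": "P4", "stage": "testing",   "weeks_in_stage": 3},
    ]

    # Number of projects, and average time spent, at each stage
    by_stage = defaultdict(list)
    for p in projects:
        by_stage[p["stage"]].append(p["weeks_in_stage"])

    for stage, weeks in by_stage.items():
        print(f"{stage}: {len(weeks)} project(s), "
              f"average {sum(weeks) / len(weeks):.0f} weeks in stage")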

When

  • Project measurement shouldn’t just be left until the end. Progress against plan needs to be regularly tracked, to allow any corrective action to be taken. On completion, performance against requirement, time and budget should be reported; any deviation from requirement can be measured by assessing the number, or extent and seriousness, of deficiencies compared with requirement. A post-project assessment should be conducted some months after delivery, to check that required functions and benefits are being achieved.
  • A regular projects report should be sought, typically once a quarter, but more or less often depending on the volume of projects and their success rate.

Handling of other changes

What

  • Whereas any capable client will know what projects it has in the pipeline, in progress and delivered, and the planned and actual costs associated with each, it may need to make a conscious effort to keep other service changes under control. The business should impose a strict cost ceiling on the discretionary changes the provider can make without client approval. The client should request a regular report breaking down change costs per main system (application and infrastructure), to show where any stability problems are occurring (a minimal sketch follows).
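A minimal sketch of such a report, breaking change costs down per main system and checking discretionary spend against the client-set ceiling; all names, costs and the ceiling itself are invented.

    DISCRETIONARY_CEILING = 50_000   # ceiling on changes made without client approval

    changes = [
        {"system": "finance application", "cost": 12_000, "discretionary": True},
        {"system": "finance application", "cost": 20_000, "discretionary": False},
        {"system": "HR application",      "cost": 18_000, "discretionary": True},
        {"system": "network",             "cost": 41_000, "discretionary": False},
    ]

    # Change costs per main system, largest first, to show where churn is
    per_system = {}
    for c in changes:
        per_system[c["system"]] = per_system.get(c["system"], 0) + c["cost"]
    for system, cost in sorted(per_system.items(), key=lambda kv: -kv[1]):
        print(f"{system}: {cost:,}")

    # Check discretionary spend against the ceiling
    discretionary = sum(c["cost"] for c in changes if c["discretionary"])
    if discretionary > DISCRETIONARY_CEILING:
        print(f"Discretionary spend {discretionary:,} exceeds ceiling {DISCRETIONARY_CEILING:,}")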

When

  • A quarterly report will often suffice, but the frequency can be stepped up or down depending on service stability. If the business is worried about particular systems, it can ask for more frequent reports and more detailed information, for example, on the sources of change requirements (e.g. error rectification, user request).

If you run your own IT

If you run your own IT, you’ll need to take more interest in overseeing things that would otherwise be the provider’s responsibility. Staffing is one such area, with headcount, the wage bill, and staff competence and development all needing attention. There are no absolutes to measure yourselves against, but you can join Special Interest Groups, enabling you to compare yourselves with others, and you can equally compare yourselves now with the way you were last year.

With an internal provider, the business will want to take a judicious interest in the aspects of IT management and measurement covered in our next chapter.
