CHAPTER 6: WHAT IT PROVIDERS NEED TO MEASURE

An IT provider is interested in whether:

  • the service is operating according to the contracts or SLAs it has with its clients;
  • it is satisfying customers and is thus well regarded;
  • it has effective strategic-level engagement with clients;
  • it is perceived as offering value for money and, more than that, is actually operating efficiently and effectively;
  • incidents and problems are being managed effectively with prompt resolutions and good customer communications;
  • changes are handled well and introduced with either no or, at worst, few unwanted side effects;
  • ‘under the bonnet’ processes are performing as they should; we come back to this topic in the next chapter.

An enlightened provider will also be interested in whether:

  • it compares well with other providers;
  • it is well positioned to meet future client needs.

All these performance indicators will need to be continually assessed and action taken to deal with any shortfalls.

There will be many aspects of a provider’s own infrastructure and performance that contribute to an effective operation and so may need to be regularly assessed and tracked, for example:

  • staffing: wage bill, headcount, capability and training;
  • costs: in-depth breakdown showing operations and projects/changes separately;
  • infrastructure and applications’ stability: problem statistics, including the incidence of recurrences;
  • project performance: the number of projects at each stage of the life cycle and statistics on status at completion (whether to requirement, time and budget);
  • incident/problem characteristics: severity and time to fix by system and by category, e.g. business continuity, security, capacity, testing and configuration;
  • change characteristics: priority (significance) and time-to-close figures by system;
  • handling of service transitions, e.g. success rates of introducing new software releases;
  • whether it is keeping up to date with technologies.

A senior manager in ICTPROVID comments:

I see you’ve had comments about us from the Corporation. They’re a pretty good bread-and-butter customer actually and we’re giving them a pretty good service, with the service stats proving it. The contract is just past the midpoint, so we might start to feel the pressure in the next year or two, as they begin working towards a new contract. It’s pretty likely the stakes will be higher next time round, with a group-wide contract the most likely choice. At the same time, they have a lot of commercially more savvy people nowadays, so they’ll be expecting good prices through the life of the contract.

We bid low for the present contract, but we’ve made money on changes, as you’d expect, and through the familiar let-and-forget culture, so what was good value for money three years ago is now pretty mediocre, which, in the short term, is good for us, of course, but bad for them.

We get a bit of hassle from the Corporation about service problems, but they really haven’t a leg to stand on. Our service is usually well within the contractual requirements, but their contract is so iffy that they didn’t make provision for stuff that their users regard as essential. We do what we can to help, but they aren’t prepared to dig into their pockets to amend the contract and we aren’t in a position to act as if we’re a charity.

We had a couple of really serious outages affecting one of their locations, which shouldn’t have happened. One of them was caused by a piece of ropey software dating from years back that we knew might play up, but you know what they say: if it ain’t broke, don’t fix it! The other one, which was far worse and took the best part of a day to fix, was to do with a hardware single point of failure; in other words, an accident waiting to happen. We’re having to pay a forfeit for the outage which is costing us more than preventing the problem would have done in the first place. That’s apart from the reputational damage: lesson learned!

An IT delivery director comments:

The Corporation take the biscuit. We’re providing a reliable service, with excellent service stats showing we’re performing comfortably better than contract. Yet they’re forever complaining about things they say they need that they didn’t bother to get covered in the contract. Imagine how my staff feel about this: undermined, undervalued and unjustly criticised.

Mind you, we did have a couple of spectacular own goals, which did our reputation no favours whatsoever. The worse of the two was an incident caused by a hardware fault that was reported at 7.25 am on a Wednesday and wasn’t fixed till 3 am the next morning. It would have been so easy to provide a hot standby, which I actually had asked for, but got turned down. I don’t want to say we’re skinflints, but we’ve ended up having to fork out more in compensation than it would have cost us to prevent the problem in the first place. The other incident lasted about six hours; it was caused by a problem with legacy software that would have been hard to prevent without replacing or re-engineering the application. So it probably was the right decision to leave it. However, we should have been more prepared to act quickly when things went wrong with it – a lesson I hope we’ve now learned.

Essential measures for providers: checklist

Some of the measures suggested here mirror those proposed for the client side. To keep the service under control, the provider will generally need to gauge things more frequently than the client.

Staff

What

  • The provider needs to keep an eye on its staff numbers and wage bill. If external resources are used, for example, temporary staff or consultants, their numbers and costs should be monitored.
  • Staff skills and competencies should be regularly assessed against those required, with action taken to deal with shortfalls. The average amount of training undertaken per person can be a useful indicator that staff members are being developed. However, having a target of a certain number of training days per person per year isn’t necessarily the best way of fostering staff development; it depends on what the staff need and that should be decided person by person.

When

  • Headcount and staff costs should be tracked frequently, say monthly, unless there are good reasons to the contrary. Competencies, training and development should be reviewed in accordance with the staff appraisal cycle.

Compliance with SLA or contract

What

  • The provider needs to ensure it complies with the provisions of the SLA or contract. Typically, the requirements will cover the things that are measured for or by the client, as discussed in the previous chapter.

When

  • Although the client may assess compliance with the SLA or contract as little as once a year, the provider will need to monitor this aspect of its performance much more frequently, say monthly. Of course, serious service problems have to be dealt with as they arise.

Customer satisfaction and client engagement

What

  • The provider needs to know how satisfied its clients and users are with the service and to understand areas of contentment and concern. Typically, as discussed in the previous chapter, it could use a survey with a five-point scale of: very satisfied, satisfied, neither satisfied nor dissatisfied, dissatisfied and very dissatisfied, supplemented by open questions on what is good and what needs to improve. A simple way of summarising the scaled responses is sketched after this list.
  • The provider will engage with the client’s management and users as a matter of course, e.g. through project and service desk contact. For an effective relationship, it is also important that there is systematic senior management contact between the two sides, to discuss the state of the service and the relationship, as well as plans and strategic direction. So the provider may want to keep count of how often these contacts take place.
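
A minimal sketch, in Python, of how the scaled survey responses might be summarised; the function name and the example responses are illustrative, not taken from the text:

    # Sketch: percentage of responses at each point of the five-point scale.
    # The example responses below are made up.
    from collections import Counter

    SCALE = ["very satisfied", "satisfied", "neither satisfied nor dissatisfied",
             "dissatisfied", "very dissatisfied"]

    def summarise_survey(responses):
        counts = Counter(r for r in responses if r in SCALE)
        total = sum(counts.values())
        return {level: (round(100 * counts[level] / total, 1) if total else 0.0)
                for level in SCALE}

    responses = ["satisfied", "very satisfied", "dissatisfied", "satisfied", "satisfied"]
    print(summarise_survey(responses))   # e.g. {'very satisfied': 20.0, 'satisfied': 60.0, ...}

The percentages give the headline picture; the open-question comments are best reviewed alongside them rather than scored.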

When

  • An annual customer satisfaction survey should be enough for a provider that systematically engages with clients, but the frequency may need to be increased if the service becomes unstable, or if there are grounds for thinking that customer satisfaction is suboptimal.
  • Depending on how vital IT is for the business and the extent to which business transformation needs to be IT-driven, senior contacts should take place up to several times a year and stock should be taken of these contacts at least once a year.

Costs and charges: value for money

What

  • The provider will want to keep the costs that it incurs to a minimum, consistent with providing an effective service to its clients and with sustaining long-term relationships with them. By keeping the lid on costs, the provider helps contain the charges it needs to levy on clients, thus maintaining competitiveness and sustaining profitability. So the provider needs to keep measuring its costs, to keep them under control.
  • For service operations, a headline measure of the cost per user gives a helpful indication of cost efficiency. Cost per user would be assessed as: (total cost of live operation) ÷ (number of users). If need be, the headline figure can be supplemented by more specific information on components of the service provision, such as the cost of the service desk. A calculation along these lines is sketched after this list.
  • For projects and smaller-scale changes, the cost to the provider per project and per change should be accounted for, which will involve measurement, in some form, of staff deployment. A more detailed breakdown of costs per project phase may be needed, with reasons for any deviation from the project’s planned costs, to keep project costs under control.
  • The provider should understand the cost of dealing with incidents and problems. Even for the best-run services, some incidents and problems are inevitable, but providers will want to minimise their problems, prevent incident recurrence and contain costs associated with the incidents and problems that do occur. To understand what is going on, the provider will need to have a breakdown by problem category, such as capacity or configuration.
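
A minimal sketch, in Python, of the headline cost-per-user figure and a per-component breakdown; the figures and component names are hypothetical:

    # Sketch: cost per user = total cost of live operation / number of users.
    # All figures and component names below are hypothetical.

    def cost_per_user(total_cost_of_live_operation, number_of_users):
        return total_cost_of_live_operation / number_of_users

    monthly_costs = {                  # breakdown of the cost of live operation
        "service desk": 40_000,
        "infrastructure": 120_000,
        "application support": 60_000,
    }
    users = 2_500

    print(f"Cost per user: {cost_per_user(sum(monthly_costs.values()), users):.2f}")
    for component, cost in monthly_costs.items():
        print(f"  {component}: {cost / users:.2f} per user")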

When

  • The finance department will normally lay down the timing of cost monitoring; typically, costs will be monitored against budget every month.
  • Cost per user should be gauged regularly, say once a quarter, to ensure the provider is well placed to contain costs and present itself in the best possible light to the client.
  • Project and change costs need to be monitored against plans (budgets) in-flight, with regular reviews to take stock, say monthly to quarterly depending on circumstances.
  • Where the costs of handling incidents and problems warrant it, they should be regularly tracked and reviewed, say monthly to quarterly as for changes. If these costs are assessed per category, e.g. hardware fault, software release fault, it will help the provider understand where to concentrate efforts to stabilise the system.

Availability and other service problems

What

  • The provider needs to measure the availability of the infrastructure and key applications. Where several locations are served, it may be necessary to break the figures down by location. The basic statistic for availability is percentage uptime per period, but to get an assessment of service reliability, providers also need mean time between faults. From a problem management perspective, mean time to resolve is also important. These calculations are sketched after this list.
  • The provider needs statistics on service problems, including those that stop short of making the service unavailable. It is helpful to have frequency and duration statistics both on incidents, which are service disruptions, and on problems, their underlying cause. An incident can be addressed by providing a workaround, whereas the underlying problem needs to be resolved to provide a proper solution and prevent a recurrence of the incident. IT providers normally classify incidents and problems by severity and have different time-to-resolve targets for each severity level; the incident resolution targets are likely to be included in client SLAs. It is helpful to break the statistics down by incident/problem category, e.g. hardware fault, release fault, capacity shortfall.
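
A minimal sketch, in Python, of percentage uptime, mean time between faults (MTBF) and mean time to resolve (MTTR) over a measurement period; the incident records, severities and categories are hypothetical:

    # Sketch: basic availability statistics from a period's incident records.
    # The records below are hypothetical.

    def availability_stats(incidents, period_hours):
        downtime = sum(i["downtime_hours"] for i in incidents)
        uptime_pct = 100 * (period_hours - downtime) / period_hours
        mtbf = (period_hours - downtime) / len(incidents) if incidents else period_hours
        mttr = downtime / len(incidents) if incidents else 0.0
        return uptime_pct, mtbf, mttr

    incidents = [
        {"severity": 1, "category": "hardware fault", "downtime_hours": 19.5},
        {"severity": 2, "category": "release fault", "downtime_hours": 6.0},
    ]
    uptime, mtbf, mttr = availability_stats(incidents, period_hours=30 * 24)
    print(f"Uptime {uptime:.2f}%  MTBF {mtbf:.1f} h  MTTR {mttr:.1f} h")

Grouping the same records by severity, category or location before the calculation gives the breakdowns suggested above.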

When

  • Statistics should be collected regularly, say monthly, to help the provider manage down problems. See the ‘Measures for the business: checklist’ in Chapter 5 for a discussion on measurement periods. Service incidents and problems do, of course, need to be managed in-flight.

Projects

What

  • Important end-project measures include cost versus budget (see above), time versus plan and an assessment of the delivery to requirement; simple variance calculations are sketched after this list. The incidence and significance of project changes, issues and problems should be tracked to facilitate the management of the project in hand and to help refine the client’s and provider’s approach for the benefit of future projects.
  • Some form of post-project assessment of the quality of the project’s delivery to the live service is needed to help ensure issues, both with the delivery in question and with the approach to projects more generally, can be tackled.
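
A minimal sketch, in Python, of end-project cost and schedule variances against plan; all the figures are hypothetical:

    # Sketch: end-project variance against budget and plan (hypothetical figures).

    def variance_pct(actual, planned):
        # Positive means an overrun against plan; negative means under plan.
        return 100 * (actual - planned) / planned

    project = {"budget": 500_000, "actual_cost": 560_000,
               "planned_weeks": 26, "actual_weeks": 30}

    print(f"Cost variance: {variance_pct(project['actual_cost'], project['budget']):+.1f}%")
    print(f"Schedule variance: {variance_pct(project['actual_weeks'], project['planned_weeks']):+.1f}%")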

When

  • The end-project measures need to be tracked and managed during the execution of each project. The end-project statistics also need to be regularly reviewed across the range of projects, the frequency depending on the organisation’s project load, say annually for an organisation running one or two projects and quarterly for one running several projects a year.
  • A post-project assessment is usually carried out two or three months after delivery. Stock should be taken of post-project assessments across the range of projects, to the same frequency as the end-project statistics above.

Service changes

What

  • Changes to IT services can range from the introduction of major new applications delivered by projects, to enable client-led business change, through the introduction of significant software or hardware updates, typically provider- or supplier-led, to smallish-scale application changes at user request. See ‘Projects’ (above) and ‘Transition management’ (below) for coverage specific to projects and significant updates.
  • The provider needs to understand the incidence of changes and their source (e.g. authorised project, supplier software update, client request), their cost (see ‘Costs and charges: value for money’ above) and, through the problem management system, whether their introduction leads to service faults or instability. A simple tally along these lines is sketched after this list.
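
A minimal sketch, in Python, of a tally of changes by source, with the share that led to service faults; the change records and source labels are hypothetical:

    # Sketch: changes tallied by source, with the proportion causing service faults.
    # The records below are hypothetical.
    from collections import defaultdict

    changes = [
        {"source": "client request", "caused_fault": False},
        {"source": "supplier software update", "caused_fault": True},
        {"source": "authorised project", "caused_fault": False},
        {"source": "client request", "caused_fault": False},
    ]

    by_source = defaultdict(lambda: {"total": 0, "faults": 0})
    for change in changes:
        by_source[change["source"]]["total"] += 1
        by_source[change["source"]]["faults"] += int(change["caused_fault"])

    for source, stats in by_source.items():
        print(f"{source}: {stats['total']} changes, "
              f"{100 * stats['faults'] / stats['total']:.0f}% caused faults")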

When

  • A regular review of planned changes and statistics on actual changes will show whether corrective action is needed to processes or software/hardware. Frequency could be quarterly to monthly, depending on the level of change being handled and whether there are any concerns about it.

Business continuity

What

  • Both the provider and its clients will need to have a means of sustaining at least the most vital services, should some calamity affect staff, premises, the IT itself or the environment. The client and the provider should both, therefore, have business continuity plans, approved by their respective senior management and tested. The provider’s business continuity plan should include a client-oriented IT continuity plan supporting the client’s business continuity plan. There should be a clear, agreed definition of what constitutes a business continuity incident (an event triggering the deployment of either organisation’s business continuity plan), so that statistics on the number and seriousness of such incidents can be tracked and lessons learned.

When

  • An annual review should be carried out of the existence, testing and currency of the business continuity plan and of the statistics on business continuity incidents. The frequency should be increased if the circumstances require it.

Infrastructure and applications

What

  • The provider needs to be on top of its hardware and software assets. All assets should be reviewed from time to time to check if they are still needed, up to date (current from the provider’s perspective), known to the provider (they have not been obtained ‘under the radar’) and licensed; a simple check along these lines is sketched after this list. Statistics should be maintained on problems affecting hardware and software, to enable weak spots to be identified and corrective action to be taken.
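
A minimal sketch, in Python, of the four inventory checks (still needed, up to date, known to the provider, licensed); the asset records are hypothetical:

    # Sketch: flag assets failing any of the four inventory checks (hypothetical records).

    assets = [
        {"name": "payroll-app", "needed": True, "up_to_date": False, "known": True, "licensed": True},
        {"name": "old-reporting-tool", "needed": False, "up_to_date": True, "known": True, "licensed": True},
    ]

    CHECKS = ("needed", "up_to_date", "known", "licensed")

    for asset in assets:
        failures = [check for check in CHECKS if not asset[check]]
        if failures:
            print(f"{asset['name']}: review required ({', '.join(failures)})")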

When

  • As a minimum, an inventory review should be carried out once a year. Hardware and software stability should be assessed in line with problem management systems, typically once a month (see above).

Suppliers

What

  • The provider needs to track its own suppliers’ performance and will have a particular interest in adherence to contract, prices/value-for-money, problem statistics and its own staff’s customer satisfaction with suppliers.

When

  • Supplier management statistics should be assembled regularly, e.g. quarterly or monthly, aligned with supplier liaison meetings. Satisfaction with suppliers should be assessed at least once a year.

Transition management

What

  • From time to time, providers introduce new software or hardware, whether to cater for a new version of an operating system, to upgrade the client staff’s laptops, to introduce an improved version of an application system, or for some other reason. The provider needs statistics to indicate how well these new releases are planned, communicated and consulted on, tested and rolled out, and how well they perform in practice. Most of the required statistics can be derived from service transition incidents recorded in the problem management system; performance in practice will be manifest in service operation incidents. A simple release success-rate calculation is sketched after this list.
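
A hypothetical sketch, in Python, of a release success-rate figure derived from counts of transition and early-life incidents; the release records and field names are assumptions, not taken from the text:

    # Sketch: proportion of releases introduced without transition or early-life incidents.
    # The records below are hypothetical.

    releases = [
        {"id": "R1", "transition_incidents": 0, "early_life_incidents": 0},
        {"id": "R2", "transition_incidents": 2, "early_life_incidents": 1},
        {"id": "R3", "transition_incidents": 0, "early_life_incidents": 1},
    ]

    clean = [r for r in releases
             if r["transition_incidents"] == 0 and r["early_life_incidents"] == 0]
    print(f"Release success rate: {100 * len(clean) / len(releases):.0f}%")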

When

  • As ever, incidents need to be managed in-flight. Statistics on transition incidents need to be reviewed for lessons learned post-transition, say from one to three months after the new release goes live. Problem management statistics should be reviewed for release-related problems in the operational system, at the same time.

Technology

What

  • The provider and, in many cases, the client will want reassurance that the provider’s approach to technology change is strategically acceptable; as an example, this may mean not being too leading edge on things like operating systems but taking full advantage of things like the Cloud. There are no magic measurements for this, but the provider should ask itself the question regularly, with clients where appropriate, of whether it is as up to date with technology as it needs to be.

When

  • Some form of annual technology review would be prudent.