5

KEY ACTIVITIES ASSOCIATED WITH THE SERVICE OPERATION STAGE OF THE LIFE CYCLE

The information in this chapter describes the main activities of the SLM during the service operation phase of the service life cycle.

REPORTING PERFORMANCE

As already mentioned, any commitment contained in an SLA must be measured and reported back to the customer. The ability to measure performance is established within the service design stage of the life cycle, but the reporting itself is a function of the service operation stage.

There are two leading practices to note in terms of reporting, the first of which we have already referenced: it is not part of your role as SLM to measure the performance achieved; rather it is the responsibility of the teams undertaking the actual activities. Your role is to collate these performance statistics, publish them and discuss them in the customer review meetings. The second practice is that reports should ideally comprise three key components: charts, values and narrative. Charts are designed to show trends and can also clearly indicate when a service level is missed. Values are necessary to satisfy those who are wary of charts and prefer definitive numbers. The narrative is the most important aspect of any report. The basic premise is that a report should be capable of being read without explanation. Therefore, the narrative should comment on and provide the underlying reasons for any exceptions or trends and any missed service levels.

With regard to the first practice, while it may be tempting to control the end-to-end process of performance reporting, attempting to gather the actual performance statistics yourself from the various operational teams responsible for delivering the service will consume a huge amount of your time on something for which you should not be responsible. It will also appear to absolve the operational teams from an important part of their responsibilities, since each team should be responsible for managing, reporting and improving the quality of their own work.

To provide the narrative, you may need to discuss the performance statistics with the operational team(s) that produced them in order to be able to brief your customer at the service review meetings on the reason(s) for a service level being missed and the actions being taken to prevent a further breach. Trends may be indicative of longer-term changes that you and your customers will also want to understand.

A simple guide is to consider the perspective of your customer: what would they want to know in terms of the service quality and consistency? In essence, your customers want confidence that their service provider is in control of the service, that the service is being delivered in a way consistent with its use and future requirements, and that you are actively looking for improvements on an ongoing basis.

We look in more detail at service reporting in ‘Measuring and reporting service performance’ below.

MANAGING CHANGES TO SERVICES AND SERVICE LEVELS

Experience suggests that no sooner have you agreed the service levels and signed off the SLA than the customer’s requirements change. ‘Change is the only constant,’ wrote Heraclitus of Ephesus (535–475 BC); so you will need to routinely and regularly validate your customer’s requirements and be prepared to manage requests for changes to services and service levels.

The service review meetings you have with your customers are typically a source of new and changed requirements, but changes can and should also be driven proactively from within IT, often through CSI. They can also come from external sources such as legislative and regulatory changes.

It may be that the change is mandatory, but even then there’s usually more than one way to implement it. The point is that, similarly to establishing the service levels in the first place, the procedure may be iterative before the agreed solution is deployed. At this point, though, the management of the change is likely to fall under the control of the change management process to ensure that all potential interfaces and dependencies are appropriately considered.

Once agreed and introduced, any such changes should be reflected within the service catalogue. As SLM your role is to advise the service catalogue manager of such changes. The changes also need to be reflected in the performance management system(s) and service level reporting, which is your responsibility.

LIAISING WITH THE BUSINESS RELATIONSHIP MANAGER(S)

When services are live and if your organisation has both the SLM role and the BRM role, then it is clearly important to ensure timely and effective communication between the two, and to avoid duplication. The main separation of the two roles is by stakeholder level, which is to say that the people with whom each role engages are usually different. Communication is vital to ensure that both roles have access to the same information flowing both in and out of IT and share their respective customer insights. One common way to manage this is through a customer relationship management system whereby either role can update the system for the benefit and information of the other.

REVIEWING AND MANAGING EXISTING SERVICE LEVEL AGREEMENTS

During your regular customer reviews (see ‘Managing customer reviews’ below), it is useful to have an agenda item that requires both parties to validate the continued relevance, accuracy and applicability of the SLA. Regardless, good practice requires you to formally review and validate the content of the SLAs at least once a year if a review has not been triggered in the meantime. If you do not do this, there is a danger that the SLAs become shelfware and fall into disuse as they become increasingly irrelevant. More importantly, there is an increasing likelihood that the services IT provides and the business requirements gradually go out of alignment and IT loses the trust of its customers. Therefore, an annual review should be a key requirement in the SLA itself.

If a review triggers a change to the SLA, this should again be formally managed through change management since the SLA is a version-controlled document and configuration item.

MANAGING SERVICE IMPROVEMENT PLANS/PROGRAMMES

We have already mentioned that one of the often forgotten responsibilities of IT is to help its customers make the most effective use of their services. IT should therefore always be on the lookout for potential improvements, and the mechanism for managing these is referred to as the service improvement plan or programme (SIP).4

There will be occasions when, as SLM, you instigate a SIP in response to a customer need to change one or more current service levels. This may be reactive, in response to a missed service level, or proactive, for instance to improve competitiveness; but it may also be triggered by IT when the department identifies an opportunity for improvement. Occasionally, customers may propose a reduction in service levels if there is a price advantage. In any of these cases, the SIP is an effective way to manage such changes.

As an SLM, you are likely to own the SIP, in which case it will be your responsibility to recognise the need for one, gather the requirements, build the business case for change, coordinate the resources necessary, track the benefits and ensure that a post-implementation review is conducted.

Service improvement in a managed service environment

A common characteristic of a managed service environment is that the service provided is the minimum necessary to meet the agreed service levels. Yet those engaging the managed service provider frequently talk about a ‘partnership’ with their service provider.

There is no agreed definition of a partnership, but clearly it should involve more than the basic provision of a service consistent with agreed service levels. There are, though, aspects of a managed service that can move it towards a partnership, including the following.

Operating an open-book accounting approach whereby the customer and service provider agree a level of profit that can be built into the service, and the customer has the right to inspect the accounts of the service provider to validate the level of profit.

Having common or aligned strategies, for instance, in terms of growth, geography and technology areas.

Having a similar size and culture in each organisation. If the managed service provider is an order of magnitude larger than the customer, then the focus on each customer is likely to be significantly less than it would be if they were more closely sized. Culture is harder to characterise, but it’s easy to see cultural misalignments.

Operating with shared risk/reward. A service provider is simply not going to offer improvements in performance or price unilaterally unless they have some incentive to do so. This means structuring the contract in a way that encourages innovation by the managed service provider in return for a share in the benefits.

As mentioned in the introduction to this book, in a managed service environment the role of SLM might be referred to as SDM or account manager. In this situation, and in a true partnership, the responsibility for promoting proactive improvements will therefore be the responsibility of the SDM or the equivalent role within the managed service provider.

PROACTIVE MANAGEMENT AND PREVENTION OF SERVICE RISKS

One of the characteristics that distinguishes more mature IT departments is the extent to which they operate proactively. One of the ways to do so is to help identify and proactively manage potential risks to customer services. In this respect, you are likely to be interfacing with your service owner/manager colleagues. The value of your role is to recognise the importance of key services and the impact on the business of their loss or deterioration.

Similarly, you will be working together with process owner/manager colleagues in problem management, availability management and capacity management, as well as with those responsible for CSI, to find proactive ways to improve the processes and procedures that underpin service delivery by providing control and consistency.

The challenge in less mature organisations is one of committing resources to proactive improvement. IT departments that work predominantly reactively will always be complaining about having insufficient time/people/money to be able to proactively improve services. Another characteristic of the reactive IT department is the ‘hero culture’. In this environment, those who react to incidents by providing support during the small hours or at weekends are recognised and rewarded for it while those who spend time and effort managing projects and changes well enough to prevent incidents from happening in the first place are considered to be simply doing their job.

In this case, a cultural change is needed, and one way to start changing a reactive culture is to make the costs of this approach more visible. There are four costs associated with incidents but, perhaps surprisingly, these may not even be measured, let alone reported or used as the basis of proactive management.

The first and lowest cost is the internal rework necessary to deal with incidents and problems. This is represented by the time spent by anyone in second-line and subsequent support handling incidents or problems assigned to them. When we ask support teams what proportion of their time this takes, the most common response is in the 80–85 per cent bracket, and it is rare to hear of values less than 30 per cent.

To calculate this reactive cost, apply this simple equation:

Average % of time spent on incidents/problems × number of people in second-line and subsequent support × average fully loaded employment cost

This can be a surprisingly large figure, running into hundreds of thousands of pounds a year for a medium to large IT department.
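
As a minimal sketch, the calculation could be scripted as follows; the figures used are purely illustrative assumptions, not benchmarks.

```python
# Minimal sketch of the internal rework cost calculation.
# All figures are hypothetical illustrations, not benchmarks.

pct_time_on_incidents = 0.40   # average share of time spent on incidents/problems
support_staff = 25             # people in second-line and subsequent support
avg_loaded_cost = 55_000       # average fully loaded annual employment cost (GBP)

internal_rework_cost = pct_time_on_incidents * support_staff * avg_loaded_cost
print(f"Annual internal rework cost: £{internal_rework_cost:,.0f}")
# Annual internal rework cost: £550,000
```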

The second lowest cost is the cost of the lost user time resulting from service unavailability. To calculate this cost, the service desk needs to record the number of users affected by incidents. This is not necessary for all incidents, as incident priority in most IT departments is a factor of the number of users affected, such that only the highest two levels of priority affect multiple users. The calculation of lost user time is therefore:

Number of incidents × number of users affected × duration × average fully loaded user employment cost

You will need to take a view on the extent to which users are actually affected since, in some cases, the loss of a service may not prevent a user continuing to work or only partially affect their productivity. Nonetheless, this figure can run into millions of pounds per year for a medium to large organisation.
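
A minimal sketch of this calculation, using hypothetical incident data and a productivity-impact factor to reflect the 'take a view' point above, might look like this.

```python
# Minimal sketch of the lost user time calculation for one reporting period.
# Each tuple: (users affected, duration in hours, productivity impact 0-1).
# All figures, including the impact factor, are hypothetical.
incidents = [
    (200, 2.0, 1.0),   # full outage affecting 200 users
    (50, 4.0, 0.5),    # partial degradation affecting 50 users
]

hourly_user_cost = 30.0  # average fully loaded user employment cost per hour (GBP)

lost_user_time_cost = sum(
    users * hours * impact * hourly_user_cost
    for users, hours, impact in incidents
)
print(f"Lost user time cost this period: £{lost_user_time_cost:,.0f}")
# Lost user time cost this period: £15,000
```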

The third lowest (or second highest) cost is the impact of lost user time, or the opportunity cost. This is the commercial impact of users being unable to work or being partially affected by IT incidents. Implicitly, this is a higher cost than the lost user time itself since, at least in a commercial organisation, users would not be employed to generate less revenue than they cost. This is harder to measure but should be attempted, as it can be huge.

Finally, the most significant cost of IT incidents is the reputational damage to the company. Two obvious examples in recent times include the incidents suffered by Royal Bank of Scotland and BlackBerry, both of which resulted in massive reputational damage, the latter unfortunately coinciding with the launch of the Apple iPhone.

Identifying and measuring the cost of incidents is one of the most effective ways to justify investing in proactive, preventative measures that will pay for themselves many times over. As SLM, you are in a position to help collect the relevant information and promote such an approach.

VALIDATING SUPPLIER CONTRACTS FOR CONTINUED ALIGNMENT WITH BUSINESS REQUIREMENTS

While contracts with suppliers are ideally established in the service design phase of the service life cycle, the alignment between these contracts and customer SLAs is an ongoing requirement in service operation. Validation of the alignment could usefully be done annually as part of the regular service level review process. It could also be triggered by the take-on of a new supplier, the renegotiation of a contract with an existing supplier or the establishment of a new or changed service level.

In all of these cases, you will have a key role to play together with the supplier manager, since you share the responsibility for ensuring that contracts with third parties align with and support the SLAs you have negotiated with your customers.

PROVIDING A POINT OF CUSTOMER CONTACT

On a day-to-day basis, you are one of the key interfaces between the business community and the IT department. Part of your role is therefore to make yourself available as a point of contact for the customers you represent and your colleagues in IT. This is not simply a reactive role but it also allows you to proactively provide relevant information, for instance about incidents, the early closure of a service or system and upcoming changes and their impact. The more active you are in this role, the more likely you are, in turn, to be kept informed, creating a virtuous circle.

With regard to incidents, it is important that you ensure that in the event of a major incident affecting any of the services or customers for which you are responsible, the service desk and/or incident manager informs you of the circumstances, the impact and what is being done to manage the incident, so that you can convey this to your customers.

MANAGING CUSTOMER REVIEW MEETINGS

This is one of the core activities of your role. Indeed, much of your time will either be spent at or preparing for these meetings. The importance of these meetings is hard to overstate. For your opposite number, these meetings are their primary interface with IT and will therefore form the basis of their relationship with IT and their opinion of IT.

The frequency of meetings is entirely at the discretion of you and your customers. Probably the most common frequency is monthly, but for more important services and customers there might be a weekly update or briefing, perhaps on a conference call where you might be joined by key people such as the incident and problem managers. If the interval between meetings is too long, say, six months, there is a risk that SLAs will become shelfware, and any relationships you have cultivated are likely to deteriorate.

As SLM, you are an IT ambassador, and to your customers you are representing IT. The key impression you should cultivate for these relationships is one of professionalism. This involves everything from attending meetings on time, looking the part and being fully prepared, to becoming a trusted partner by making and keeping appropriate commitments.

It is vital to understand the level of authority you have in the role. You need to be able to influence the provision of IT services around the SLAs and you need to have the authority to speak with your counterparts on behalf of IT. Without this authority, your customers will bypass you to get things done.

Having said that, a key mistake to avoid is overstretching your authority. The confidence and trust that your customers have in you and IT is easily destroyed by making commitments or promises that are outside the scope of your authority and that you may therefore not be able to deliver. Once trust has been lost in a relationship it is very hard, and takes a long time, to restore.

Planning and preparation

(See also Chapter 11 for additional information on customer review meetings.)

The service review meetings are a key interface between the IT department and the business, and in fulfilling the role of SLM, you are managing this interface on behalf of both parties.

In terms of preparation, there is nothing truer than the old saying, ‘If you fail to plan, you plan to fail.’ Planning is the key to a successful meeting. This means asking your customer what items they’d like to include on the agenda and agreeing it in good time beforehand. Ensure you review the minutes of the previous meeting, recognise any action points assigned to you and be prepared to respond to these, either by demonstrating that you have completed them or by describing progress with good reasons why you have yet to complete any actions that are beyond the agreed date.

A draft agenda is offered below.

1. Minutes of previous meeting and review of outstanding actions

2. Review of performance achieved against SLAs since last meeting

3. Explanation of missed service levels and the causes, performance variations and preventative actions

4. Review of business changes and plans for the next 3–12 months

5. Review of IT plans for the next 3–12 months

6. Agreement of actions

7. Any other business

8. Date of next meeting

The core of your discussions will focus on the performance of the services provided, tasks and actions undertaken by IT since your previous meeting, and, in particular, where any of these have failed to meet the service level or the reasonable expectation of your customer. Therefore, you essentially need two pieces of information at your fingertips: statistics and measurements demonstrating the extent to which IT met the service levels and expectations; and explanations for any situation that caused a service level or expectation to be missed, ideally together with the action IT is taking to prevent any recurrence.

The basic principle is that any service levels and commitments made by IT to a customer have to be measured and reported. If you think about it, this is obvious, since making a commitment that can’t be measured is somewhat pointless.

We will look at measuring and reporting performance shortly.

After the meeting

Following the meeting, it is good practice to send the minutes to the customer representative(s) with whom you met, and to anyone else you agreed should receive a copy. This might include, for instance, anyone referenced in the meeting but not present, anyone with an action, and your CSI manager, since there may be reference to a potential improvement.

As the IT representative, you have the responsibility of following up on your customer’s behalf any agreed IT actions and progressing these with the relevant people within IT. My advice would be to maintain a single point of contact wherever appropriate and, rather than expecting your colleagues to follow up with your customer, have your colleagues update you so that you, in turn, can update your customer.

MEASURING AND REPORTING SERVICE PERFORMANCE

While all SLA commitments have to be measurable and have to be reported, it is not your responsibility as the SLM to measure the activities to which IT is committing; that responsibility lies with the functional managers responsible for the operational activities. We referenced this in ‘Reporting performance’ above. This can be one of the biggest challenges you face in your role as SLM: first, because other managers may see it as your role; and second, because even if they recognise it as their responsibility, it still means that you are reliant on their activities to fulfil yours. The simple fact is that each function has responsibility not only for fulfilling their operational objectives but also for measuring them as the basis of effective management and CSI.

Your role is to gather and collate these operational measures into a consolidated report for your customer(s). This is not always as simple as it sounds. For instance, you will want a report of incidents by customer and/or service, yet the statistics provided by the incident management team may be produced and sequenced by priority or category. Furthermore, incident reporting might be spread across the incident management teams with no overall coordination.

Clearly, a good tool can facilitate reporting. Ideally, there is ‘one source of the truth and multiple views’. In other words, all statistics and reports should come from one place, and the obvious place is the service management toolset. Furthermore, the more automation you can introduce into the reporting, the easier and quicker it is to produce reports, the greater the potential for analysis and the fewer the errors. However, toolset configuration is another dependency for you as SLM, as it is unlikely to be your responsibility or that you will have the necessary skills. Please see Chapter 7 for more information on tools.

If you are reliant to any extent on colleagues to produce information, you may obviously meet some resistance in obtaining that information since you have no line authority over them, and they will be under their own time pressures. There is no simple solution to this; essentially the entire IT team needs to buy into this way of working and accept their shared responsibilities.

Ideally, the information sources related to measuring performance are identified before the SLA is signed or the commitment made. Nonetheless, the challenge many organisations face is that performance is often not reported at a customer or business unit level.

The ideal reports comprise a blend of charts, tables and narrative. Charts are ideal for showing trends and allowing the reader to determine whether performance is improving, deteriorating or staying the same. However, some people, including many who work in finance, are strongly sceptical of charts, not least because they are often distorted to highlight variances; a common example is a chart whose y-axis does not start at zero and which may therefore misrepresent the data.

To avoid any potential misunderstanding, charts should be accompanied by the same data in tabular form to allow the reader to make their own interpretation from objective data.

Generally speaking, by far the most useful aspect of a report is the narrative, since this provides the interpretation of the data and charts for the reader’s benefit. The narrative allows the reader to understand the key messages of the report quickly and without ambiguity. Furthermore, each reader receives the same message.

Producing the narrative is your responsibility, although the key elements of it will hopefully have come from teams such as incident management, problem management and availability management. Essentially, any variance from normal should have an explanation. This might be a missed service level or simply an unusual value outside the normal range of values.

In many cases, you will not know the cause of the variation but will need to gather information from the relevant technical teams or the incident log. Once you have determined the cause, and if this is something undesirable, such as an incident, you should also understand what (if anything) is being done to prevent further recurrences. All of this information is of potential value to the reader and should therefore be included in the report, using appropriate language.

Another potential source of information is the service owners; they can contribute meaningful information from a service perspective.

The data in a typical report might look similar to Figure 5.1, a real, but anonymised, example to which the charts and narrative should be added.

Figure 5.1 Sample weekly incident, problem and change activity report

Other activities you might wish to add to the report for your readers’ benefit could include:

the change schedule;

the projected service outage;

planned maintenance periods;

the schedule of continuity/disaster recovery tests.

Once you have defined a report structure, ideally this should be maintained from week to week or month to month so that the recipients will become familiar with it and find it easier to navigate through the report.

DATA GRANULARITY

When compiling performance reports, you need to recognise that the longer the measurement interval, the easier it is to achieve a defined level of performance. For instance, if the performance measure is ‘Percentage of incidents resolved within service level, by priority’, then this is easier to achieve measured over a monthly interval than a weekly interval.
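
A minimal sketch, using made-up weekly figures, shows how a poor week can be hidden entirely within an acceptable monthly figure.

```python
# Hypothetical weekly figures: (incidents resolved within service level, total incidents).
weeks = [(49, 50), (50, 50), (42, 50), (49, 50)]
target = 0.90  # 90 per cent resolved within service level

for week, (met, total) in enumerate(weeks, start=1):
    pct = met / total
    print(f"Week {week}: {pct:.0%} {'met' if pct >= target else 'MISSED'}")

monthly = sum(m for m, _ in weeks) / sum(t for _, t in weeks)
print(f"Month : {monthly:.0%} {'met' if monthly >= target else 'MISSED'}")
# Week 3 misses the target (84%), yet the monthly figure (95%) comfortably meets it.
```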

USING MEANINGFUL MEASUREMENTS

One of IT’s classic mistakes is to use measurements more relevant from an internal, technical perspective than from a customer perspective. Examples include the following.

Defining availability in terms of percentage: typically, we include in an SLA a figure such as ‘99 per cent availability of the service’. To a customer, a far more meaningful measure is the number of minutes’ downtime per week that this level of availability actually allows (a simple conversion is sketched after this list).

‘95 per cent of priority 1 incidents will be fixed within 4 hours’: while on the face of it this sounds reasonable, when there are only three or four priority 1 incidents in a month, what possible meaning can 95 per cent of these have?

‘Transaction response time will be less than one second measured at the host’: making a commitment to a customer about response time at the host is irrelevant. If the network delay adds another second to the response time, the user will perceive a response time of up to two seconds and has no visibility of, or interest in, a measure taken from the service provider’s perspective.
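
As a simple illustration of the first example above, the conversion from an availability percentage to downtime might be sketched as follows (assuming a 24×7 service week; adjust for the agreed service hours).

```python
# Converting an availability percentage into downtime a customer can picture.
# Assumes a 24x7 service week; adjust for the agreed service hours.
availability = 0.99
minutes_per_week = 7 * 24 * 60   # 10,080 minutes

downtime_minutes = (1 - availability) * minutes_per_week
print(f"{availability:.0%} availability allows roughly {downtime_minutes:.0f} minutes "
      f"(about {downtime_minutes / 60:.1f} hours) of downtime per week")
# 99% availability allows roughly 101 minutes (about 1.7 hours) of downtime per week
```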

Figure 5.2 Sample chart showing the number of respondents who selected each option

Another classic mistake in the use of measurements is to average customer satisfaction values by summing the scores and dividing by the number of respondents. It is almost impossible to influence this figure. Far better is to report the number or percentage of respondents who gave each of the possible scores, as shown in Figure 5.2.
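
A minimal sketch, using hypothetical survey scores, contrasts the uninformative average with the per-score distribution that Figure 5.2 illustrates.

```python
# Hypothetical satisfaction scores on a 1-5 scale from one survey round.
from collections import Counter

scores = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4, 3, 5]

average = sum(scores) / len(scores)
print(f"Average score: {average:.2f}")   # a single figure that is hard to move or interpret

distribution = Counter(scores)
for score in range(1, 6):
    count = distribution.get(score, 0)
    print(f"Score {score}: {count:2d} respondent(s) ({count / len(scores):.0%})")
```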

As ever, the trick is to think about performance from your customer’s or stakeholders’ perspective.

MANAGING REQUESTS FOR CHANGE

There will be occasions when your customer requests some form of change or variation to the service, perhaps on a temporary basis, but occasionally on a permanent basis. It is unlikely that you will have sufficient authority to make commitments to your customers on behalf of IT, so a key part of your role is to listen and understand not only the request but also the justification for and reasoning behind the request. From this information, you can help your customer build a business case for the change, if this is needed.

There are a number of processes and therefore, by implication, a number of roles with which you might interface in respect of a customer change request. Some of these will be influential in approving or managing the change, others may need to be kept informed. These roles may include:

the service portfolio manager;

the BRM;

the CSI (or quality) manager;

the service owner/manager;

the business analyst.

Your role in supporting the change request is to be your customer’s single point of contact for it. This means gathering, understanding and conveying the request and keeping the customer informed of progress. In this situation, you are the ‘voice of the customer’ within the IT department, a pivotal role in ensuring requests are appropriately managed.

ACTING AS AN OUTBOUND COMMUNICATION CHANNEL

Not only does your role provide a voice for your customers within IT, it also provides a voice for IT out to the business. You therefore represent a channel for information to flow both ways, but yours is not the only channel. Other channels or interfaces might include those from:

business relationship management;

capacity management;

financial management;

the service desk;

desk-side engineers/on-site technical support.

In addition, there are electronic communication channels, such as the intranet, the self-help portal, incident records and perhaps even newsletters.

A key requirement for supporting a professional approach to relationship management is ensuring that communications between IT as the service provider and business stakeholders are accurate, timely, meaningful and consistent. The list above shows that there are likely to be a multitude of interfaces and communication channels; ideally the service provider should manage these through a communication plan. It would be entirely appropriate if this plan were part of your responsibility. Alternatively, if it is owned by another role, it is essential that you are aware of and contribute to the plan as a key stakeholder.

The service provider’s communication plan helps to ensure consistent and timely communication both within the IT department and between IT and its stakeholders. As such, it is likely to be referenced within OLAs and third party contracts, the former falling within your remit.

4 The term ‘service improvement plan’ tends to be used to manage a one-off change for a specific purpose, whereas the term ‘service improvement programme’ tends to be used for an ongoing programme of improvements into which specific items are introduced as and when required. Confusingly, both are abbreviated to ‘SIP’.
