CHAPTER 8

Measuring Services

Measurement for the Common Good

Establishing a Truth with Management

Apples and Oranges—Define Baseline Data before Design and Launch

Making the Case for Return on Investment

Using the Service Blueprint to Model Measurement

Money Talks

Avoiding Common Mistakes When Measuring Services

Measurement Frameworks

SERVQUAL and RATER

The Triple Bottom Line

Summary

Service designers and service providers both have a need to prove that design provides a return on investment. Results can be measured in terms of money made or saved, an improved customer or user experience, value created for society, or a reduced drain on the environment.

We have not found a single, perfect method of measurement that provides robust evidence for the value of service design. However, it is important to define some measurement criteria before a new design is launched and to track these parameters to prove value and improve the service. You will do yourself and the field of service design a great favor if you always include the definition of performance indicators in your proposals. Exactly what to measure is likely to vary with each new service design.

In the industrial age, leaders like Henry Ford and General Motors’ Alfred Sloan developed science-based corporate measurement systems to enable them to “manage by the numbers.” It soon became apparent, however, that this way of working lacked the systems perspective needed to ensure quality and improve performance on a continual basis.1

After World War II, W. Edwards Deming, an American statistician and management guru, pioneered a more systems-based approach that first proved its effectiveness in the Japanese automotive industry. In the 1990s this came to be known as “lean” enterprise or production—a focus on removing every tiny inefficiency from manufacturing processes. Many enterprises in the service sector use “lean” as an approach to improve their services, but this method flourished in the industrial tradition, and although it may make service delivery more efficient, it rarely improves the customer experience. Ironically, Deming himself argued that a myopic focus on efficiency and the elimination of product defects was not enough. Instead, companies should try to predict the underlying customer needs and think about what products or services will be required 5 to 10 years from the present and innovate for that future.2 Measuring efficiencies in production makes sense from an industrial point of view, although the sustainability agenda requires companies to consider the full life cycle of products. But for services, what must be measured is consumption—the experiences of the service provider’s agents and users.3

When you base measurement on the problems and successes people have when they use a service, you are better positioned to streamline delivery while improving the customer experience. Efficiency and experience are rarely contradictory forces, as long as you use the customer experience as a baseline for measurement, because improved efficiency usually goes hand in hand with happier customers.

As an example, insurance customers will tell you that they highly value quick payment when they make a claim. When insurers meet this customer need, they find that they save processing time because customers do not require as many interactions. They also find that people are likely to submit lower claims. The quicker an insurance company settles a claim after a burglary, the fewer CDs people seem to find missing. Happy corporation, happy customers.

Another advantage of measuring from the outside in is that it enables companies to compare themselves to the competition in a more accurate way. They will not be able to get call center response time numbers from their competition, but it is as easy to speak with a competitor’s customers as with their own. Do their competitor’s customers get through on the phone more quickly than their own customers?

Starting with the customer makes as much sense for measurement as it does for design.

Measurement for the Common Good

Measurement is traditionally seen as a way for managers to control and plan their businesses better. Over the last decade, however, digital systems have made data radically cheaper to harvest and more accessible to managers, frontline staff, and customers. This democratization of data means that the purpose of measurement is shifting away from simply providing management tools. Measurement becomes a way to engage managers, frontline staff, and customers in collaborative service improvement. When measurement becomes transparent, there is an opportunity to make improvement a common cause rather than an adversarial exercise.

Good feedback channels for customers enable them to tell service providers about problems and opportunities. Customer ratings and purchasing patterns enable customers to make better choices based on other people’s experiences. The same measures set a standard for both managers and staff to live up to. When this is bolstered with internal business data, managers and staff can work better together to meet customer needs and drive efficiency.

Before we get into which data are useful to measure, it is worth emphasizing that the act of measuring in itself is as important as what you measure. Most managers will say that when something is measured it will improve, and people are likely to trust evidence from fellow consumers when making choices (Figure 8.1). Ultimately, what is measured should be driven by what is most likely to create a shared culture of improvement within the organization. This is what creates valuable, long-term relationships with customers and enables sustainable growth.

FIGURE 8.1
The Android Market interface includes several examples of how measurement has become an integrated part of the purchasing experience, providing live data both to the provider and to customers.

Establishing a Truth with Management

Top management buy-in is vital to any design effort, and the same goes for measurement. If leaders do not see the strategic reasoning behind measuring something, they are not likely to take the results seriously and act on them. This is particularly true of service design projects because the process often involves a change in cultural mindset within an organization.

One would expect CEOs to base their goals on hard numbers, but when it comes to championing customer experience, committed managers act as much on their own reasoning as on the output of their spreadsheets.

Therefore, designers need to help managers come to their own strong conclusions about what will make their services succeed. Designers can often be very vocal about stepping into the shoes of others when it comes to the end user, but their empathy evaporates when dealing with their clients. CEOs are people, too, with their own needs, motivations, and behaviors, and designers can help them by uncovering and communicating self-evident truths that are simple for CEOs to share with their organizations. Some examples of this are:

  • “We are the leader in our sector and can only grow by increasing our margins. Therefore we need to offer the best customer experience.”
  • “We are a new entry in the market, and need to give customers more for less. Therefore we need to offer the best customer experience.”
  • “The market is saturated and we need to focus on retaining customers, not acquiring new ones. Therefore we need to offer the best customer experience.”
  • “Our success depends on acquiring critical mass quickly. Therefore we need to offer the best customer experience.”
  • “Developing new features is expensive and doesn’t provide us with significant competitive advantage. Therefore we need to offer the best customer experience.”
  • “Customers don’t understand how incredibly smart our technology is. Therefore we need to offer the best customer experience.”
  • “The patients’ experience is as important to their well-being as the clinical outcome. Therefore we need to offer the best patient experience.”
  • “As a monopolist in our market, our customers love to hate us. Therefore we need to offer the best customer experience.”
  • “The reputation of our products suffers from our neglected customer service. Therefore we need to offer the best customer experience.”

This list could go on, and it applies not only to customer-provider relationships, of course, but also to public service organizations. The point is that managers need simple and logical reasons for investing in service design, which means designers need to support those decisions with measurement.

The most straightforward approach is to establish as a truth that the service experience is crucial to success and then measure how it improves. If the logic is right, the bottom-line numbers will work themselves out when the customer experience improves.

We Do That Already, Don’t We?

Apples and Oranges—Define Baseline Data before Design and Launch

To focus the work and make everyone involved more accountable, define what you want to measure when you start the design work. Goals may change along the way as innovative solutions emerge, strategies change, or the competition develops in unexpected ways, but when you approach launch it is essential to establish baseline data to measure against. You need the “before” numbers to prove the success of the “after.”

As most designers have experienced, it is difficult to get the numbers that prove pure commercial results from the design input, unless you work in packaging design. Too many factors influence the outcome to attribute it to design alone, but nothing beats cash as evidence of success. Your chances of producing hard economic facts about the effects of the design work will be much higher if you define what you will measure at the start of a project.

Making the Case for Return on Investment

Service providers often struggle to understand the potential return on investment for service design. The key to making the business case for service design is to focus on how you want the work to change customer behavior, and then estimate the potential impact on the business in numbers.

One example of this approach is a process for proving the business value of improved customer experience published by Forrester Research.5 This model includes the costs and potential benefits of improvements in particular industry sectors, based on changed behavior.

Some typical behaviors often addressed in service design projects can be translated to results on the bottom line:

  • New sales: increased acquisition of new customers
  • Longer use: increased loyalty and retention of customers
  • More use: increase in revenue for every customer
  • More sales: increased sales of other services from the same provider
  • More self-service: reduced costs
  • Better delivery processes: reduced costs
  • Better quality: increased value for money and competitiveness
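
To give a sense of how these behavior changes can be turned into a business-case number, here is a rough sketch in Python. All of the customer counts, uplift percentages, and cost figures below are invented for illustration; a real case would plug in your own baseline data and targets.

# Rough, hypothetical estimate of the annual impact of changed customer
# behavior. Every input figure here is invented for illustration only.
customers = 50_000                    # current customer base
annual_revenue_per_customer = 300.0
cost_to_serve = 40.0                  # annual cost of serving one customer

# Assumed behavior changes attributed to the redesign
acquisition_uplift = 0.02             # 2% more new customers (new sales)
retention_uplift = 0.03              # churn drops by 3 points (longer use)
revenue_uplift = 0.05                 # 5% more revenue per customer (more use)
self_service_shift = 0.10            # 10% of service cost moves to self-service

new_sales   = customers * acquisition_uplift * annual_revenue_per_customer
longer_use  = customers * retention_uplift * annual_revenue_per_customer
more_use    = customers * annual_revenue_per_customer * revenue_uplift
cost_saving = customers * cost_to_serve * self_service_shift

total = new_sales + longer_use + more_use + cost_saving
print(f"Estimated annual impact: {total:,.0f}")   # 1,700,000 with these inputs

The point of such a sketch is not precision but to put the behaviors you intend to change alongside concrete targets that managers can challenge, refine, and later measure against.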

When planning a service design project, a smart strategy is to establish key goals right away and assign concrete targets to these, be they commercial, social, or environmental. This will help get managers on board, justify the investment, and provide direction for the design work itself. It also means you will be measured against what you intend to influence, not what someone else decided they could or should measure. You need to make sure you are comparing apples to apples.

When you have determined which behaviors and experiences you want to measure, you need to establish how you can track these to get results that help everyone continue to learn and improve.

Although the list above includes the word “customer,” this measurement strategy is as true for nonprofit and public-service projects as it is for commercial ventures. In fact, making the case for investment in design applies even more when public money is involved, especially in times of austerity. As designers, we should be using our empathetic skills to understand what our clients’ needs and motivations are, just as much as those of our end users.

Using the Service Blueprint to Model Measurement

What are truly service-native ways to model and measure the value of design? A useful way to approach measurement of services is to return to the service blueprint. The service blueprint has already been used to capture important moments of user interaction with the service, so you already know what you are trying to influence with your design and thus what you want to measure. Therefore, you can use the blueprint not only to plan and design a service, but also as an operational tool to analyze where costs and revenue occur and how they affect the service experience as a whole. The service blueprint can tie together the hard business metrics with the “soft” experience aspects of a service and can ensure that everyone—management, staff, and the design team—is on the same page. You will see below that we look for metrics to measure across time and touchpoint channels just as we did when thinking about the design of the service experience and proposition.

Money Talks

Regardless of our view on design’s value to people’s lives and experiences, to argue for service design as a business-critical activity, we need simple and useful models that show how revenue flows in a system and how this is directly influenced by design decisions.

Two defining characteristics of service delivery provide a framework for integrating business modeling with the design processes:

  1. Services must adapt to people’s changing needs over time.
  2. People interact with services across multiple touchpoints.

Transformed to points of cost and revenue, these characteristics provide us with a truly service-native way to model a business case and measure the results:

  1. Cost and revenue through the customer journey: By breaking down the business model across stages of a customer journey, it is possible to model where costs can be reduced and revenues can be generated in relation to where value is created for the customer.
  2. Cost and revenue across touchpoints: By breaking down the business model across touchpoints, it is possible to model in which channels costs can be reduced and revenues can be made while creating value for the customer.

Using the service blueprint to zoom in to the economics of a single interaction with the customer or out to the big picture of the aggregate economics of the service enables managers to prioritize which interactions to invest in, and to analyze whether the whole service proposition will provide a return on investment (Figure 8.2).

FIGURE 8.2
The service blueprint transformed into a business case. Instead of describing design detail, we use the same framework to calculate costs and revenue through the whole customer journey. This tool helps make the case for investing in a great experience in one interaction when the revenue may be recouped in another channel or at a later stage.
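
As a minimal sketch of the same idea in code, the blueprint can be reduced to a grid of journey stages and touchpoint channels with a cost and a revenue figure in each cell. The stages, channels, and numbers below are hypothetical.

# A service blueprint reduced to cost/revenue cells:
# (journey stage, touchpoint) -> cost and revenue per customer.
# Stages, channels, and figures are hypothetical.
blueprint = {
    ("become aware",   "advertising"): {"cost": 12.0, "revenue": 0.0},
    ("join",           "website"):     {"cost": 3.0,  "revenue": 0.0},
    ("join",           "call center"): {"cost": 9.0,  "revenue": 0.0},
    ("use",            "app"):         {"cost": 1.0,  "revenue": 25.0},
    ("use",            "call center"): {"cost": 7.0,  "revenue": 4.0},
    ("get help",       "call center"): {"cost": 11.0, "revenue": 0.0},
    ("leave or renew", "website"):     {"cost": 2.0,  "revenue": 18.0},
}

def margin_by(axis: int) -> dict:
    """Net margin per journey stage (axis 0) or per touchpoint (axis 1)."""
    totals: dict = {}
    for key, cell in blueprint.items():
        label = key[axis]
        totals[label] = totals.get(label, 0.0) + cell["revenue"] - cell["cost"]
    return totals

print("Margin per journey stage:", margin_by(0))
print("Margin per touchpoint:  ", margin_by(1))
print("Whole-service margin:   ",
      sum(c["revenue"] - c["cost"] for c in blueprint.values()))

Zooming in means looking at a single cell; zooming out means summing a row, a column, or the whole grid. Cells that lose money, such as a call center interaction, can then be weighed against the experience they create and the revenue they protect elsewhere in the journey.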

Avoiding Common Mistakes When Measuring Services

Measure Experiences over Time

The first mistake organizations typically make when measuring the service experience is to speak with customers or users only once. However, measuring people’s experience at different stages of their journey is crucial. Their understanding and expectations will be hugely different when they are new to a service compared to when they have used it for a while.

For example, when people are first admitted to a hospital with a cancer diagnosis, they lack the competence to understand why doctors suggest one treatment over another. Patients need simple facts to cope, and their questions are along the lines of “What are my chances of survival?” After three months of chemotherapy, however, patients may have enough competence to verify that nurses are administering the right dosage, and they can understand enough medical terminology to read the same journal articles as their physicians.

What needs to be measured is whether a service meets people’s expectations at different stages. If people have a great experience when they are first sold the service, does it live up to expectations in everyday use? Does a service that is simple to start with give people a greater depth of experience when they gain competency in using it? When do people consider changing providers? How difficult is it for them to leave a service?

To figure this out, you can stay with individual customers and measure their experience over time, or you can engage with several customers at different stages to understand how their expectations and fulfillment change. Most companies want to be great at both acquiring and retaining customers. Measuring along different stages of the customer journey will enable them to do both better. Increased revenue and higher margins should follow naturally.

Measure across Touchpoints

The second common mistake organizations make is to speak with customers who have used only a single service channel. This is fine if you need to understand the quality of a single touchpoint (“What do you think about our website?”), but it will not give you any valuable data about the quality of the service experience as a whole.

Therefore, you need to measure people’s experience as they move between touchpoints because this reveals the relationship between expectations and experiences (“Your website was great, but when I tried to speak to a real person I was seriously disappointed. I’m switching to another bank.”).

The key thing to learn from measuring across touchpoints is to understand which channels set customer expectations too high to fulfill in the next interaction, and which perform too badly to keep up with the rest of the experience. It is these outliers that destroy the service experience, not the touchpoints that perform to expectation every time.
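
One way to spot those outliers is to compare each touchpoint’s satisfaction score with the average for the whole journey and flag the ones that deviate most. Here is a minimal sketch, with hypothetical scores on a 1–10 scale.

# Per-touchpoint satisfaction scores for one journey (hypothetical, 1-10 scale).
touchpoint_scores = {
    "website": 8.6,
    "branch visit": 7.9,
    "call center": 4.1,
    "mobile app": 8.2,
    "letter": 5.0,
}

journey_average = sum(touchpoint_scores.values()) / len(touchpoint_scores)
threshold = 1.5  # flag touchpoints this far from the journey average

outliers = {
    name: round(score - journey_average, 1)
    for name, score in touchpoint_scores.items()
    if abs(score - journey_average) >= threshold
}

print(f"Journey average: {journey_average:.1f}")
print("Outliers (deviation from average):", outliers)

Note that positive outliers matter as much as negative ones: a website that scores far above the journey average may simply be setting expectations that the call center then fails to meet.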

Share Customer Satisfaction Measurements with Staff

Most corporate employees are used to being measured on key performance indicators that form part of their appraisals and salary negotiations. Usually these are geared to help managers have conversations with staff about productivity and efficiency. Another common mistake, however, is to stick to this top-down view of measurement.

Sharing customer satisfaction data with staff on an ongoing basis can be valuable. Some organizations measure satisfaction after every interaction customers have with staff and report these data back to the individual staff members. Employees can see how well they are measuring up to customers’ expectations. Colleagues can compare their performance with others in their units, and can see how their unit performs compared with others.

At first, this may seem like a risky proposition, but it turns out that this feedback is inspiring to staff. It gives them tools to focus on beyond productivity, and lets them engage with the quality that they deliver. It helps staff to highlight system problems that prevent them from providing good service, and provides a sound basis for conversations with colleagues about improving the customer experience.

Ultimately, good customer service is what frontline staff really care about. Most of them are in their positions because they enjoy speaking with people and helping them get things done with a smile on their faces. When you measure service experiences, it pays off if you make the data as transparent as possible to everyone involved and format it to enable collaboration that fosters continual improvement.

Research shows that customer satisfaction scores have a direct relation to a customer’s propensity to buy a service and to remain loyal to a provider.

Measurement Frameworks

Net Promoter Score

Within a broad field of methods for measuring satisfaction, one popular framework is the net promoter score. The simplicity of the method is its great advantage—customers are simply asked “How likely is it that you would recommend our company to a friend or colleague?” This type of survey is relatively simple to conduct, and it is consistent across companies and industries. This makes it easy for companies to compare their performance with the competition, and good net promoter scores have been documented to relate directly to business growth.

This makes the net promoter score useful for measuring the impact of a new service design. The disadvantage of the method is that it does not tell you much about what you need to do to get better.
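
For reference, the score itself is conventionally calculated from answers on a 0–10 scale: respondents answering 9 or 10 count as promoters, 0 through 6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch, assuming the raw answers arrive as a simple list:

# Net promoter score from raw 0-10 answers to the "would you recommend" question.
def net_promoter_score(answers: list[int]) -> float:
    promoters = sum(1 for a in answers if a >= 9)
    detractors = sum(1 for a in answers if a <= 6)
    return 100.0 * (promoters - detractors) / len(answers)

# Hypothetical survey responses: 5 promoters, 3 passives, 2 detractors -> 30.0
print(net_promoter_score([10, 9, 9, 8, 7, 6, 10, 3, 9, 8]))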

The Expectation Gap

More elaborate methods for measuring customer satisfaction dig deeper into how services meet or exceed the expectations people have of them. When you ask customers how satisfied they are and include the context of the time and place of their activity, you are more likely to identify the points where a new design has really made a difference and where the greatest potential for improvement lies.

The challenge with this method is that it requires a systematic approach to measuring customer satisfaction over time. You need to build measurement into the design of the service.

Where we have seen the greatest impact from measurement is in the organizations that routinely ask their customers how satisfied they are with each interaction with the company. These organizations build their own software that triggers surveys after a service interaction, aggregates data across channels and over time, and feeds it back to managers and staff. They also establish routines for reflecting and acting on the results.
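
As an illustration of what building measurement into the service can mean in practice, here is a minimal sketch of a post-interaction survey record and two simple aggregations, one per channel and one per month. The data model and field names are assumptions for the example, not a description of any particular organization’s system.

# A post-interaction survey record, aggregated by channel and by month.
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class SurveyResponse:
    customer_id: str
    channel: str          # touchpoint where the interaction happened
    journey_stage: str    # e.g. "join", "use", "get help"
    answered_on: date
    satisfaction: int     # 1-10

def average_by(responses, key):
    """Average satisfaction grouped by an arbitrary key function."""
    sums, counts = defaultdict(float), defaultdict(int)
    for r in responses:
        k = key(r)
        sums[k] += r.satisfaction
        counts[k] += 1
    return {k: round(sums[k] / counts[k], 1) for k in sums}

responses = [
    SurveyResponse("c1", "call center", "get help", date(2024, 1, 9), 4),
    SurveyResponse("c2", "app", "use", date(2024, 1, 17), 9),
    SurveyResponse("c1", "app", "use", date(2024, 2, 3), 8),
    SurveyResponse("c3", "call center", "join", date(2024, 2, 21), 6),
]

print("By channel:", average_by(responses, lambda r: r.channel))
print("By month:  ", average_by(responses, lambda r: r.answered_on.strftime("%Y-%m")))

The same records can be sliced by journey stage or by individual staff member, which is what makes it possible to feed the results back to the people who can act on them.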

This kind of commitment to customer satisfaction from an organization expands the purpose of measurement beyond proving the worth of design and enables cultures of continual improvement. When clients take this commitment seriously, designing the measurement of a service becomes an integrated part of designing the service itself.

Case Study: Gjensidige’s Measurement System

SERVQUAL and RATER

In the 1980s, marketing researchers Valarie Zeithaml, A. Parasuraman, and Leonard Berry developed a service quality framework called SERVQUAL.6 It was created as a method to manage service quality by measuring gaps between what organizations intend to deliver and what they actually deliver, as well as between people’s expectations and their actual experiences with a service.

SERVQUAL can be a great tool both for measuring and designing services, particularly the simplified version commonly known by the acronym RATER, which measures gaps between people’s expectations and experience along five key dimensions:

  • Reliability: the organization’s ability to perform the service dependably and accurately
  • Assurance: employees’ knowledge and ability to inspire trust and confidence
  • Tangibles: appearance of physical facilities, equipment, personnel, and communication materials
  • Empathy: understanding of customers and acknowledging their needs
  • Responsiveness: willingness to help customers, provide prompt service, and solve problems

The great thing about RATER for service designers is that these dimensions easily translate into design principles that can be used in projects.

In a call center, for example, empathy can be translated into a design principle that states “Always make sure you have understood the customer’s problem.” The design solution might include in the script for staff a sentence that starts with “Let me check that I understand you correctly . . .” Another solution might be a confirmation e-mail template that starts with “Describe the customer’s problem in her/his own words.”
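
To make the measurement side concrete, here is a minimal sketch that scores a service on the five RATER dimensions by subtracting the average expectation rating from the average experience rating; the 1–7 scale and the numbers are hypothetical.

# SERVQUAL/RATER-style gap scores: experience minus expectation per dimension.
# Ratings are hypothetical survey averages on a 1-7 scale.
expectations = {"reliability": 6.5, "assurance": 6.0, "tangibles": 5.0,
                "empathy": 6.2, "responsiveness": 6.4}
experiences = {"reliability": 5.8, "assurance": 6.1, "tangibles": 5.5,
               "empathy": 4.9, "responsiveness": 5.2}

gaps = {dim: round(experiences[dim] - expectations[dim], 1) for dim in expectations}

# Negative gaps show where the service falls short of what people expected,
# and therefore where design principles are most urgently needed.
for dim, gap in sorted(gaps.items(), key=lambda item: item[1]):
    print(f"{dim:>14}: {gap:+.1f}")

In this invented example, empathy and responsiveness show the largest negative gaps, which is exactly the kind of result that the call center design principle above is meant to address.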

Using RATER as a foundation, it is possible to start design projects by defining measurement criteria. From here you can develop design principles and design solutions, and finally, measure the effects of the design. This process completes the loop of measurements and design of a service, but sometimes the consequences of a design reach beyond the service itself and affect society and the environment. This is when you need to apply broader frameworks of measurements, such as the triple bottom line.

The Triple Bottom Line

A useful measurement framework, both to provide design direction and to evaluate results, is the triple bottom line. The concept grew out of the sustainability field and was coined by John Elkington in the late 1990s.7 The basic concept of the triple bottom line is that an organization should be measured not only by its financial performance, but also by the ecological and social outcomes of what it produces.

The model challenges the idea that companies are responsible to their shareholders only, and states that organizations are responsible to their stakeholders—anyone who is directly or indirectly affected by the actions of an organization.

The triple bottom line is particularly useful when working with public sector organizations whose ultimate goal is to improve society, but it is increasingly useful in the private sector. The transportation, healthcare, and energy industries are obvious examples of sectors whose responsibilities to communities and to the planet must be part of the accounting when a new service is designed.

In practice, the triple bottom line is a useful framework when defining the goals of a design project. It helps broaden the scope of the thinking and often challenges clients in a productive way to reflect on the greater ambition of their work.

To return to the Streetcar example, the company’s proposition was focused on convenience and economy, but the ecological benefits of car sharing also featured as a selling point for customers. Car sharing also addresses the social need of mobility—it helps people get to work, see friends and family, and go shopping at IKEA (you would be surprised how often this goal comes up in user research about mobility).

When the triple bottom line was used to measure the results of Streetcar after it became a mass-market proposition, the service produced results along all three dimensions (Figure 8.4).

FIGURE 8.4
Streetcar produced results both for the company and for customers: it is a profitable company that saves money for customers, takes cars off the road and greenhouse gases out of the air, and makes it easier for people to combine the use of cars and public transportation.

It is also important to mention that it is not always necessary to account for results purely in terms of numbers. When describing environmental and social effects, sometimes the value to people and the planet is better described in words. It can take a lot of unnecessary work to put a number on the value people derive when a service makes it easier for them to get around, but it can be strongly argued that such a service improves their quality of life, especially when you tell their stories.

Summary

  • Service designers and service providers both have a need to prove that design provides a return on investment. It is important to gather your “before” data prior to launch so that you can contrast it with the after-launch data and see which design activities have worked and which have not.
  • The metrics should relate to the touchpoint experiences you are trying to improve rather than arbitrary top-down metrics; otherwise, you are not comparing apples to apples. The service blueprint can provide you with a framework that can define what touchpoint interactions should be measured and in what way.
  • It is important to measure across the time of the customer journey—not just an individual touchpoint experience in isolation—as well as across channels. Individual touchpoints can score high customer satisfaction ratings, but they also set up expectations for the transition to another touchpoint. These transitions are important to take into account because they make up a large part of the service experience.
  • Measurement data can be shared with staff as a performance indicator, which aligns the customers’ interests with those of the staff. It also provides motivation for an organization’s employees.
  • Measurement can (and should) take into account the triple bottom line of economic, environmental, and social impacts.