CHAPTER 18
Yes, But Can You Prove It's Working?

A man hears the sound of an air rifle being fired. He follows the sound. He finds a small boy with a hopelessly dilapidated weapon, barrel bent, sight broken. There are some hand-drawn targets on a wall, and all the boy's shots are right in the middle of the bull's-eyes. ‘That's amazing’, says the man. ‘How do you shoot so well, with such a useless gun?’ ‘Easy’, says the boy. ‘Doesn't look easy’, says the man. ‘It is’, says the boy. ‘I draw the targets after I've taken the shots’.

Hopefully the financial services marketing moral is obvious: hitting your targets isn't difficult if you don't decide what they are until afterwards.

But who would do such a thing, we hear you cry. Wouldn't that be a tad unprofessional? Well, yes … but that doesn't mean it doesn't happen.

In researching this chapter about marketing measurement, your authors have access to a unique and invaluable source of material: 16 years’ worth of entries to the Financial Services Forum Marketing Effectiveness Awards. Each year since 2002, firms have been selecting their best and most effective marketing activities and writing them up as entries for the awards, following the instructions in the entry pack which guide them on how the scoring system works. Altogether, over 300 firms have submitted a total of over 1500 entries. And though there have been hundreds of winners, many of them genuinely outstanding, it also has to be said that overall, if these are the marketing activities that firms have picked out for their very most impressive and demonstrable records of effectiveness, we’d hate to see what the rest of their activities look like.

This chapter draws extensively on the experience of running and judging these awards, because it provides such clear insight into some of the industry's best and worst habits when it comes to effectiveness. Let's begin with the scheme itself – revised and refined somewhat in its early years, but for several years now sticking to a set of largely fixed and consistent benchmarks for measuring effectiveness.

To be clear, the scheme's judging process is designed simply and single-mindedly to reward marketing effectiveness, no more and no less. (Unlike most communications awards, the judges are under instruction not to take any account of creative excellence or originality.) The categories have varied a little over the years but at the core are:

  • Advertising
  • Content Marketing
  • Customer Experience
  • Customer Loyalty & Retention
  • Digital Marketing
  • Direct Marketing
  • Integrated
  • Internal Communications
  • New Product, Service or Innovation
  • Public Relations
  • Social Media

There are also three Judges' Special Awards, for Best Consumer Insight, for Marketing Learning and for Marketing Excellence.

As in most awards schemes, the winners are chosen by a panel of judges – typically about a dozen financial services marketing luminaries of taste and discernment, plus both your authors (joke). But, again unlike most awards schemes, the judges aren't there just to express opinions and prejudices. We're there to assess, as accurately as we can, the extent to which entries deliver against the marking system – a marking system that is applied consistently across all the categories, and that has been designed specifically to measure what the activity actually achieved.

There are 400 marks available altogether, allocated unequally across seven sections, each headed with a question. These seven questions, together with the number of marks available for each, are:

1. What was the issue or challenge facing the business? (50 marks)
2. What was the insight that underpinned your strategy and tactics? (50 marks)
3. What was your proposed strategy to address the issue or challenge? (25 marks)
4. How did you execute the strategy? (50 marks)
5. What metrics did you put in place to track the effectiveness of your solution? (25 marks)
6. How can you prove that your strategy met its objectives? (100 marks)
7. What value was added to your business as a result of the strategy? (100 marks)

You'll notice that half the marks – 200 out of 400 – are available for the measures of achievement, and in fact it's a little over half – 225 out of 400 – if you include question 5, which asks entrants to itemise the measurement techniques used. What's more, the earlier questions – as well as carrying fewer marks – are really there mainly as scene-setting, to provide the basis on which the answers to the last two questions can be assessed. Entrants are given a paragraph of guidance on how to tackle each question, as follows:

  1. What was the issue or challenge facing your business?

    Articulate the problem, challenge, project or opportunity. Describe the business environment in which the marketing activity was completed. Only by giving a clear, well-articulated and quantified description of the issues at the outset will the judges be able to determine the overall effectiveness of your marketing activity. It never ceases to amaze the judges how few entries quantify their objectives.

  2. What was the insight that underpinned your strategy and tactics?

    How and why did you establish your approach to solve the original issue? What drove the decision to focus your resources on this marketing activity? Maybe it was insight from external research, or a collection of comments from your colleagues. Provide a clear explanation of the insight and how it informed the development of your strategic thinking.

  3. What was your proposed strategy to address the issue or challenge?

    Provide an explanation of the overall marketing strategy being used. The strategy might encompass a number of marketing campaigns for a brand (maybe not specific to the category being entered) but your description here will hopefully clarify why certain activities were performed in question 4.

  4. How did you execute the strategy?

    Describe in detail, using images if necessary, the marketing activity completed. Whether an innovative approach, or just doing the basics well, showcase the tactics or elements of the campaign that you feel delivered the greatest impact.

  5. What metrics did you put in place to track the effectiveness of your solution?

    Give a clear description of the controls used to measure the effectiveness of your marketing activity. Were metrics and objectives set at the start of the campaign, giving a clear definition of effectiveness over a set period? How well do these metrics complement the category being entered?

  6. How can you prove that your campaign strategy met its objectives?

    Tell us how your marketing activity met, or exceeded, the original objectives. Where possible, provide the data related to all the metrics you had in place to measure effectiveness. How well does this go toward resolving the issue – or meeting the opportunity – set out in question 1? Can you match the marketing activity specifically to the results being achieved? Is there clear evidence that the activity was cost-effective, with greater revenue than cost?

  7. What value was added to your business as a result of your strategy?

    As well as the short-term, cost-effectiveness benefits of the marketing activity described in question 6, the judges are looking for evidence of long-term value-add to the business. The very best entries go beyond the results of the marketing activity, to explain the additional benefits achieved for the organisation as a whole. What long-term impact will this activity have on your business or the industry sector? Give a complete definition of the value to the business, with quantification where possible.

We reproduce this questionnaire firstly in the context of our discussion of the awards, but also with a broader purpose in mind. It seems to us that any team setting off on any kind of marketing initiative would do well to tackle the task with these seven questions in mind, knowing that when the task is completed they'll need to be able to answer them.

In the awards judging, though, the performance of the entries against them can only be described as mixed. Across all the entries, the average mark out of 400 is a little over 200. The lowest scores are typically around 120 (although one 2016 entry managed a truly remarkable 85), and the highest are between 300 and 350: an entry scoring 350 will almost certainly be a winner, and an entry scoring 300 has an excellent chance.

The average number of marks varies a good deal between categories. In general, the highest averages are to be found in the categories where either gathering detailed performance measures is intrinsic to the activity (for example Direct Marketing and Digital Marketing), or budgets are big enough to justify significant expenditure on measurement techniques such as econometrics (for example Advertising and Integrated). Conversely, the lowest average scores are to be found in the categories where, how can we put this, the least marketing rigour tends to go into planning the activity in the first place: Sponsorship and Corporate Social Responsibility stand out, with Content Marketing not far behind.

Overwhelmingly, the principal weaknesses of the poorer entries are vagueness and lack of detail. Many start vague and imprecise, and end the same way. A few include absolutely no hard measures of effectiveness at all.

Here are some short verbatim quotes from the 2016 entries showing what this means in practice.

(From an answer to Q1, on the challenge facing the business)

Our primary objective was to maintain our presence in the IFA [Independent Financial Adviser] sector.

(From an answer to Q7)

Across the IFA market, sentiment toward the range as a whole has improved.

(No details)

(From an answer to Q6)

[The product] has been highly praised by advisers and … clients like the simplicity of the approach

(No details)

(From an answer to Q1)

We wanted to build our reputation among professional and consumer audiences

(No details)

(From an answer to Q7)

We were delighted with the overall result, which answered our objective of raising our profile.

(No details)

(From an answer to Q2)

Engagement – Distributing relevant content to our audience and engaging them was paramount

There are literally hundreds of examples of vague, imprecise statements of objectives and summaries of results like these across the entry forms.

It should also be said, though, that there's a similar number of well-researched, well-documented and well-written entries, most if not all of which find their way onto the judges' shortlists and the best of which will eventually win the awards. By way of examples, we include three of the winning entries from the 2016 Awards as an Appendix.

While the biggest problem afflicting the weaker entries is, quite simply, a lack of available data to define their objectives or to substantiate their effectiveness, it's sometimes apparent that entrants have encountered more particular problems in attempting to answer the questions. The biggest of these is confidentiality. Entrants are allowed to use indexed or percentaged scores – ‘sales increased by 63%’ or ‘call volumes were 135% of target’. But we know from our experience as awards entrants, as well as judges, that clients can be very sensitive about even semi-disguised results like these.

The other most frequently occurring problem is the difficulty of attributing performance to particular elements of the marketing activity. If sales did increase by 63%, and call volumes were indeed 135% of target, was this the effect of a brilliant direct marketing campaign, or was it more because the entrants had slashed the price by 50%? Or a combination of the two – and if so, in what proportions?

Your authors, neither of whom is an expert in more advanced measurement techniques, suspect that statistical methodologies with challenging names such as regression analysis and econometric modelling may well be able to cast light on such complex questions. But such techniques, if indeed they do have a role to play, seem to be beyond the capabilities of most of those writing the awards entries.
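
For the curious, the bare mechanics are less forbidding than the names suggest. Here's a minimal sketch in Python – the weekly figures are entirely invented, and a genuine econometric model would control for far more (seasonality, competitor activity, distribution and so on) – showing how a simple regression tries to separate the effect of marketing spend from the effect of a price cut:

```python
import numpy as np

# Twelve weeks of invented data: a direct marketing campaign and a price cut
# overlap, and we want to estimate how much of the sales uplift belongs to each.
marketing_spend = np.array([0, 0, 0, 10, 10, 10, 10, 10, 10, 0, 0, 0], float)   # £k per week
price_cut       = np.array([0, 0, 0,  0,  0, 50, 50, 50, 50, 50, 0, 0], float)  # % discount
weekly_sales    = np.array([100, 98, 103, 131, 128, 192, 196, 189, 193, 161, 104, 101], float)

# Ordinary least squares: sales ~ baseline + b1 * marketing_spend + b2 * price_cut
X = np.column_stack([np.ones(len(weekly_sales)), marketing_spend, price_cut])
(baseline, per_k_of_marketing, per_pct_of_discount), *_ = np.linalg.lstsq(X, weekly_sales, rcond=None)

print(f"Baseline weekly sales:             {baseline:.1f}")
print(f"Extra sales per £1k of marketing:  {per_k_of_marketing:.1f}")
print(f"Extra sales per 1% price discount: {per_pct_of_discount:.1f}")
```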

More often than not the two broader problems that shine through the least satisfactory entries are easy enough to identify:

  1. Few, if any, objectives were set for the activity in the first place (or, similarly, those that appear in the entry have the same distinctly post-rationalised flavour as the targets drawn by the boy with the air rifle).
  2. Very few, if any, specific and tailored metrics were put in place to measure the performance of the activity. If the entry reports on any metrics at all, they're often either largely irrelevant ones that have the advantage of being available free of charge, like Google Analytics, or completely irrelevant ones that the firm happens to have commissioned anyway for some quite different purpose, as for example when internal staff surveys are pressed into service to measure the performance of external communications campaigns.

It is of course possible to sympathise with both these problems, which are by no means always easy to solve or avoid.

The challenge with objective-setting is coming up with any plausible way to determine what the objectives ought to be, or what good objectives might look like. In sales-oriented campaigns, it's sometimes possible to do something based around the calculation of ROI – if we're spending £100,000 on an activity, we want a result that delivers an acceptable return on that investment, like, say, at least £110,000-worth of business. But on closer examination even this is often thin, and usually raises at least as many questions as it answers: is £110,000-worth of business a good result? What was the opportunity cost – could we have spent that £100,000 more effectively? Is it a real figure when you factor in less visible costs, like the time of the people involved?
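
To see how quickly that arithmetic unravels, here's a minimal worked example in Python. Every figure in it – the staff-time cost and the contribution margin especially – is an invented assumption, there only to show the shape of the calculation:

```python
# Invented figures, purely to show the shape of the calculation.
campaign_cost = 100_000        # the visible marketing spend
revenue_generated = 110_000    # business attributed to the campaign
staff_time_cost = 15_000       # assumed internal time, rarely counted
contribution_margin = 0.40     # assumed: 40p of each £1 of revenue is actually margin

# The ROI as it tends to be claimed: revenue against visible spend.
claimed_roi = (revenue_generated - campaign_cost) / campaign_cost

# A fuller view: margin earned, against all the costs involved.
total_cost = campaign_cost + staff_time_cost
fuller_roi = (revenue_generated * contribution_margin - total_cost) / total_cost

print(f"ROI as usually claimed: {claimed_roi:+.0%}")   # about +10%
print(f"ROI on fuller costing:  {fuller_roi:+.0%}")    # about -62%
```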

And of course many, if not most, marketing activities are either not directly sales-related, or only partially sales-related, or only partially responsible for the sales that may be achieved. We are resolutely unconvinced, for example, by methodologies that claim to attach an ROI to less tangible objectives such as brand awareness. And we can think of at least a dozen reasons to explain how a firm was able to generate sales of £110,000, 11 of which have nothing to do with the £100,000-worth of marketing activity.

Surprisingly, these objective-setting problems are particularly tricky when it comes to digital activities. This is for two contrasting – you might almost say opposing – reasons. The first is that so much of the available performance data is horribly unreliable. Some of the figures on digital advertising performance, for example, are really almost entirely fictional. There is little point in targeting a million views if the viewers in question are Chinese schoolchildren paid 50 cents a hundred. But conversely, when reliable and detailed digital performance data are available, it's horribly difficult to know what they mean or what value should be ascribed to them. How important are email open rates? Or unique visitors? Or page impressions? Or dwell rates? They may be indicators of good and valuable behaviours. Or, quite likely, not. On a number of occasions in PR entries, we were proudly told that a particular piece of trade press promotion resulted in opportunities to see (OTS) in excess of the adult population of the UK…

The challenge with relevant measures is a combination of our old friends (or old enemies?): time and money. It's almost always – maybe even always – possible to put relevant metrics in place, and obviously doing so is hugely important and beneficial for your objective-setting: if you're able to frame your objectives very precisely against totally relevant measures, the task of framing them becomes immensely easier. But measuring things takes time and costs money, and when both are already in scarce supply for the marketing activity you have in mind, diverting a chunk of the little that's available into measurement is painful. It's no coincidence that, in general, the biggest, most tailored and most robust measurement programmes are usually attached to the most expensive marketing activities: the best example is big-ticket consumer brand advertising. There are three complementary reasons why firms spending many millions year after year on a heavyweight media campaign will almost always put in place a brand tracking study to measure its performance:

  1. In relative terms, it feels affordable. If you're spending £20 million a year on television, diverting less than 5% of that amount into a classic Hall & Partners tracking study doesn't seem unreasonable.
  2. The firm can be reasonably confident that the measurement will be worthwhile, and will show that the campaign is having at least some measurable effect. (By contrast, the authors know of a syndicated study in the asset management world that reports, at six-monthly intervals, that the advertising awareness of some syndicate members remains, as it's always been, in a range between 0% and 1%. It's difficult to understand the value of this information, or why participants in the study would keep stumping up their share of the research cost twice a year.)
  3. If we need an annual sign-off from the Board for a recurring £20 million budget cost line, we're going to need to present some kind of evidence that we're producing something for our money.

None of these points applies with anything like the same force to lower-budget, less visible or more tactical activity. In the worst case, the cost of putting robust metrics in place can be as much as the cost of the activity itself, especially when addressing business audiences: reducing a budget for an activity by 50% in order to channel the other 50% into measuring the proportionately reduced effects would be a decision reflecting a level of rigour alien to pretty much all marketers. Similarly, going to a lot of trouble and expense to measure the performance of short-term, ad hoc activities that probably made little impact anyway feels more than a little pointless.

Against that, the only sure way that a firm can build up norms and benchmarks of its own, against which it can direct and relatively easily measure its future activities over time, is by measuring the things it's doing now. Of course you have to draw the line somewhere – but where? Spending half the total budget on measurement is silly. But how about a quarter? Or an eighth?

While we think about measurement principally from the perspective of cost, there's another dimension to take into account. We've focused so far very specifically on the cost of using research to measure the effectiveness of a marketing activity. But many would argue that using research and measurement tools only at this stage is foolish, unprofessional and indeed downright risky: that if you're intending to submit the performance of the activity to robust measurement, it's imperative to take an equally robust, research-driven approach to the development of the activity in the first place.

Take a simple example, such as a lead-generation direct marketing campaign. As a starting-point, the obvious performance metric, which will (probably) be readily available, will be the number of leads actually generated.1 If we're looking to do a particularly thorough job, there are other elements we could add. It would be extremely valuable to know how many of those leads actually do convert to sales, of course, to make sure that we haven't just generated response from numbers of tyre-kickers. We could usefully profile and analyse the respondents to find out more about them, and to what extent they match our intended target market. And down the track, we could look at these buyers' stickiness: do they stay with us and make further purchases, or have they moved on within months?
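
To make the point concrete, here's a small sketch in Python – the Lead record and the campaign_metrics function are names we've invented for the purpose – showing how little extra machinery is needed to report conversion and 12-month retention alongside the raw lead count:

```python
from dataclasses import dataclass

@dataclass
class Lead:
    converted: bool                  # did the lead become a customer?
    still_customer_after_12m: bool   # a crude 'stickiness' flag

def campaign_metrics(leads: list[Lead]) -> dict[str, float]:
    """Beyond the raw lead count: conversion rate and 12-month retention."""
    converted = [l for l in leads if l.converted]
    retained = [l for l in converted if l.still_customer_after_12m]
    return {
        "leads_generated": len(leads),
        "conversion_rate": len(converted) / len(leads) if leads else 0.0,
        "twelve_month_retention": len(retained) / len(converted) if converted else 0.0,
    }

# Invented example: five leads, three conversions, two still customers a year on.
sample = [Lead(True, True), Lead(True, True), Lead(True, False), Lead(False, False), Lead(False, False)]
print(campaign_metrics(sample))  # conversion_rate 0.6, twelve_month_retention ~0.67
```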

But long before we get to any of this, those responsible for this and subsequent campaigns are likely to ask what insight is available to help them develop a more effective approach next time. This is a fair question, not least in the context of assessing the performance of those individuals in producing whatever they produce. In our creative agency days, both of your authors felt thoroughly uncomfortable when placed in a situation where our work would be judged entirely on results, but we had no research or insight available to inform the work we were producing. If we were relying on guesswork all the way through the development process, we felt, it seemed unreasonable to fire up a whole dashboard of quantitative measures to assess the quality of our guesses. Or to put it in the language of that boy with the air rifle, if we're just taking shots in the dark it's hardly fair to turn the lights on afterwards to see how close we've come to the target.

Of course in this particular example – a lead generation campaign – it may well be that the direct marketing methodology itself is designed to provide the insight that's needed. Rather than carry out initial consumer research, it may well be that the team develops a best-guess solution that it takes into live testing, very likely against a control version that has achieved the best results previously. If the campaign is a digital one, it will usually be possible to learn from a wider range of test executions and variables. In such cases, it's understood that the project is in itself designed at least in part as a learning experience, and those responsible know they have permission to fail – although in our experience of the agency side, it's not a good idea to take advantage of this permission too often.

Still, the general point about adopting a research-and-measurement-based approach stands. If those responsible for any marketing activity know that rigorous measurements will be made of the effectiveness of the activity, they're right to insist, as far as they're able, that equally rigorous research should go into the planning of the activity. And much more often than not, that'll have implications for timetables and budgets too.

So far in this chapter, we've focused almost entirely on the issue of measurement of marketing activity, by marketing people, for the benefit of marketing people – the sorts of metrics we need to help us to do our jobs well. Arguably, though, there are at least two other levels where measurement matters.

One is to do with overall, big-picture, ideally single-number research intended to track customers' general level of satisfaction with their experience of the firm. Of these, the long-standing favourite has been the CSAT (Customer Satisfaction) score, while more recently NPS (Net Promoter Score) has become extraordinarily popular and fashionable – though, we'd say, it has started to decline in popularity over the last year or two.

CSAT doesn't actually refer to any single research approach, but is an umbrella term for a variety of (quantitative) methodologies used to measure, fairly obviously, how satisfied your customers are. NPS is a more singular thing, derived from a single question asking customers how likely they are, on a 0–10 scale, to recommend your product or service to a family member or friend: the score is the percentage of promoters (those scoring 9 or 10) minus the percentage of detractors (those scoring 0 to 6).
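
The calculation itself is trivial. A minimal sketch in Python, using an invented sample of survey responses, looks like this:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """NPS from 0-10 'how likely are you to recommend us?' ratings:
    the percentage of promoters (9-10) minus the percentage of detractors (0-6)."""
    if not ratings:
        raise ValueError("no survey responses")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Invented sample of ten responses.
print(net_promoter_score([10, 9, 8, 7, 7, 6, 3, 0, 9, 5]))  # -10.0
```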

As relative measures that can be tracked over time we have nothing against these methodologies, or indeed against a growing number of less well established approaches jockeying for position. (Very recently, for example, we've seen a cluster of firms adopting the CES technique, which apparently measures the Customer Effort Score.)

However, while useful as relative measures, we'd warn strongly against thinking of techniques like these as providing any kind of objective reality. Using such blunt instruments to explore the subtleties of the workings of the human mind will always lead to unreliable outcomes. One of your authors, for example, no matter how delighted he may be with a financial service, will always tick the box on the NPS questionnaire that says he will ‘definitely’ not recommend it to a family member or friend. This is an entirely truthful answer, but only reflects the fact that he doesn't think of himself as the kind of person who goes round recommending financial services to people.

Anyway, simple research measures like these, especially those which lead to a single-number key finding, are often intended to address the second additional measurement need that exists in many firms. This is the need for marketing people to demonstrate the effectiveness of their activities to colleagues outside the marketing department and, most importantly, to their firms' senior management and Boards. This is a problematic area, and an important one: we think it's among the most important and necessary areas for improvement in the new financial services marketing.

A recurring theme of this book is the continuing low status of marketing within many, if not most, financial services businesses – a low status perfectly expressed in that over-used phrase, the colouring-in department. We've argued that the single most important reason for this perception is simply that on the whole marketing really hasn't been as central to the success of most financial services firms as it has been in so many other parts of the consumer economy – just try describing the marketers at Nike, Procter & Gamble or Amazon as the ‘colouring-in department’.

But it seems equally clear to us that the second most important factor that depresses the perceived status of marketing, not far behind the first, has been the continuing failure of marketers to express the commercial value of what they do in terms that are meaningful to non-marketing colleagues – and, perhaps most of all, to colleagues in functions that are typically most remote from marketing, like risk, actuarial, finance and IT.

Improving our performance in these areas isn't easy. Not many directors of finance or IT will want to sit through all 90 slides in the quarterly brand tracking presentation, and even if they did they'd retain a fair amount of scepticism when it came to more abstract and highly qualitative measures like Brand Salience and Emotional Temperature. At the opposite extreme, though, reducing all the complexity down to a monthly email to the Exec, announcing that this month our NPS has improved from minus 16 to minus 14, is if anything even less useful. What do these figures mean? Are they good? Why have they moved? Without a good deal of supporting insight, we're very little further forward.

Of course there is huge variation in the quality and quantity of marketing measurement tools used by marketers to give an account of their effectiveness to their senior management colleagues, and some do a superb job of presenting meaningful, hard and compelling business metrics to their colleagues. But especially in light of the fact that marketing expenditure is often one of the largest budget items needing Board approval, it's remarkable how thin, anecdotal and generally unfit for purpose a great many firms' measures can be.

For all these reasons, we put measurement very near the top of the list of areas for improvement in the new financial services marketing.
