Chapter 20

A Pre-Flight Checklist for Successful Neuromarketing Studies

In This Chapter

arrow Reviewing pitfalls that lead research studies astray

arrow Seeing how to avoid those pitfalls

arrow Working with research partners to get the results you’re looking for

Not all neuromarketing studies turn out well. Indeed, not all market research studies turn out well. Often, this sad outcome has nothing to do with the methods involved, but it can be traced to the age-old problem identified most memorably in the great Paul Newman movie Cool Hand Luke: a failure to communicate. Or, as expressed in the wisdom of computer programming: garbage in, garbage out. The best research intentions can be thwarted by poor communication and planning well before a study is even launched. In this chapter, we tell you how to avoid five pitfalls that can ruin your neuromarketing study. We address each of these pitfalls in the form of a question you need to answer before you launch a study. We call this a pre-flight checklist. Like a pilot preparing for a flight, you need to answer these questions before you take off, not after.

What Are Your Business Objectives for This Study?

Here’s an ironic scenario that happens too often in neuromarketing: Marketers at some company become intrigued by the idea of studying their consumers’ brains, so they decide to do a pilot project. They pick a neuromarketing vendor and commission a project to “test a couple ads” or “see what people think about our new package design.” When the results come back, the previously intrigued marketers are puzzled because the study doesn’t seem to tell them much. There are lots of esoteric metrics and measures, but no critical business questions seem to be definitively answered. They aren’t sure how to explain the study to senior management. The marketers become disillusioned and conclude that neuromarketing isn’t for them. The neuromarketing firm goes home, and the company goes back to focus groups and surveys.

What’s ironic about this scenario? Marketers with a healthy initial interest in neuromarketing end up losing interest because they tried to test the methodology without first defining a business objective they cared about.

Pilot projects like this are difficult for a neuromarketing firm to resist. After all, turning down a new client is tough. But a better approach is to encourage the client to find a real business objective to study with neuromarketing. If the client finds it difficult to identify and match up an objective with a neuromarketing approach, the vendor should suggest an educational session to give the client a better picture of what questions neuromarketing is best suited to address. If the client still can’t find a relevant business objective, it’s probably best to politely decline the engagement. Time and resources of both parties are better expended on a project more likely to yield repeat business.

tip.eps For marketers and other potential neuromarketing clients, our recommendation is this: Don’t start your first project just to test neuromarketing — you’ll only be disappointed. If you’re curious about neuromarketing and you want to understand it better, invest in an educational session. Several reputable neuromarketing consultants provide training for just this purpose. Don’t pay for a full-blown research project until you have a real business problem to address — one that engages your team and your senior management — and then only select neuromarketing if it’s the right fit to help you solve that problem.

Business objectives are important because they shape almost every decision that follows: what hypotheses to test, what specific tests to perform, what materials to test, what target audience to sample, and so on. In addition, senior management expects a meaningful business objective. Like every department in today’s corporations, marketing must justify its contribution to the company; it must be able to show its return on investment (ROI) for every initiative it undertakes.

Marketing initiatives generally don’t get funded if they aren’t based on a sound business case supporting the required investment. This business case links the marketing initiative to the business and financial goals of the company, aligning the initiative with higher-level business purposes. If a research study is going to address questions that the company ultimately cares about, it needs to be embedded in an active marketing initiative with a well-understood business purpose. The business objective of the research project provides a strong connection to marketing strategy and broader business purposes. After that connection is made, it becomes much easier to define, design, execute, interpret, and communicate the study.

Business objectives for successful neuromarketing studies usually begin with a key business purpose such as improving brand equity, defining new products, creating compelling advertising campaigns, or improving in-store or online presence. In other words, business objectives are closely tied to the application areas detailed in Chapters 9 through 14. If your study can show value in addressing one of these key business areas, it’s likely to make a meaningful contribution to your company’s research program and objectives.

The more general the objective, the less useful it’s likely to be. “Testing some ads,” for example, is not a good objective. “Determining whether a new campaign is surpassing the emotional impact of the current campaign” is. “Testing the strength of brand associations” is not a good objective. “Establishing whether the new ad campaign effectively delivers on the top three brand attributes” is.

tip.eps Here are some questions to help identify strong business objectives for your neuromarketing study:

check.png Under what marketing (or other department) initiative will this study be conducted?

check.png What are the business purposes supported by the marketing initiative?

check.png How, in broad strokes, can this study help its sponsoring initiative achieve those business purposes?

check.png What specific questions can this study answer that directly address the purposes of its sponsoring initiative?

check.png What do you feel confident you already know going into this study, and what else do you need to find out?

check.png What are your expectations or preferences regarding the outcome?

What Hypothesis Are You Testing and What’s the Best Test to Use?

It’s surprising how many neuromarketing studies get commissioned without having a clear hypothesis to test. The absence of a hypothesis is probably the number-one cause of client dissatisfaction with neuromarketing research. People can’t help but be disappointed when an expensive and time-consuming project results in vague findings that can’t be tied to any specific research question. But this is exactly what happens when you conduct a study just to see “what people are thinking.”

Neuromarketing findings in the absence of a hypothesis may be interesting, but they don’t relate to business needs. They may not even be interesting. What should a marketing team do with the knowledge that an ad activates a certain amount of memory processing? Two key questions immediately follow: “Compared to what?” and “So what?” Only a clear hypothesis gives meaning to these important follow-up questions.

The best way to start developing an interesting hypothesis is to look closely at the business objectives of the study. Hypotheses tend to emerge out of expectations driven by the business objectives.

Let’s use an example to illustrate this situation. Suppose package designers in a food product company have developed two new package designs for a well-established grocery brand. Their business objective is to select which package design to use. They decide that they should test both design options to see which one performs better. They probably have expectations, based on their experience with package designs, about which design will outperform the other on some key neuromarketing metrics, such as novelty, processing fluency, emotional engagement, or brand association. This allows them to create their initial hypothesis: Design A will outperform design B on the specified performance metrics.

But shouldn’t the new designs be tested against the existing packaging as well, given that any new design needs to perform better than the current one? This makes sense, because the business won’t want to incur the expense of a package upgrade if it doesn’t believe the upgrade will improve performance on the shelf. This generates a revised hypothesis: Design A will not only outperform design B, but also outperform the current package design.

Now the design team realizes that testing the two new designs against each other and the current package still isn’t adequate to meet the business objectives. Once on the shelf, the new packages won’t compete with each other or with the current design. Whichever design gets chosen will have to compete for shoppers’ attention with the packages of competitive brands. So, a further revision is needed: Design A will attract attention on the shelf better than design B, better than the current design, and better than competing packages that will surround it on the shelf.

In this example, admittedly a simplified one, you can see how a study design can be refined from a vague goal of “testing some new package designs” to a much more specific hypothesis. In a real-world example, this hypothesis would be refined even further. For instance, if the business objective of the new design was to revitalize the brand to attract a younger audience, the hypotheses would need to include that segmentation element. If the chief purpose of the new design was to refresh the brand’s image with current habitual buyers, then the hypothesis would be refined differently, to emphasize the need to maintain key design elements and aid recognition while avoiding any disruption of existing buying habits.

When the hypothesis for a study has been selected, the next question to ask is whether you’ve identified the proper tests for evaluating the hypothesis. Your neuromarketing partner will play a major role in this task, because it usually comes to the project with a set of testing protocols it has developed for testing hypotheses like yours. But you still should ask probing questions at this point, because there are many variations in testing that may, in fact, be more or less appropriate for your specific needs.

For example, what’s the right test for the third hypothesis, testing against competing packages? Should the alternative packages be tested in isolation, or in shelf layouts where all the competing packages are displayed together? Should you use a monadic testing design (each participant sees only one package design or shelf layout) or a sequential monadic design (each participant sees a sequence of packages or shelves, usually in a randomized order), or perhaps a forced-choice design (in which package alternatives are displayed together)? Can the test be effectively conducted with images on a computer screen in a lab, or do you need to test the alternatives in a real in-store environment?
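
If your team (or your research partner) scripts its stimulus presentation, the minimal Python sketch below illustrates the difference between these three presentation plans. The stimulus labels, function names, and panel of participants are placeholders we invented for illustration, not part of any standard neuromarketing toolkit:

```python
import random

# Hypothetical stimulus set: two new designs, the current package,
# and two competing packages (labels are placeholders, not real data).
STIMULI = ["design_a", "design_b", "current", "competitor_1", "competitor_2"]

def monadic_plan(participants, stimuli):
    """Each participant sees exactly one stimulus, balanced across the panel."""
    return {p: stimuli[i % len(stimuli)] for i, p in enumerate(participants)}

def sequential_monadic_plan(participants, stimuli, seed=42):
    """Each participant sees every stimulus, one at a time, in a random order."""
    rng = random.Random(seed)
    return {p: rng.sample(stimuli, k=len(stimuli)) for p in participants}

def forced_choice_plan(participants, stimuli):
    """Each participant sees all alternatives displayed together (like a shelf layout)."""
    return {p: tuple(stimuli) for p in participants}

if __name__ == "__main__":
    panel = [f"P{n:02d}" for n in range(1, 7)]
    print("Monadic:", monadic_plan(panel, STIMULI))
    print("Sequential monadic:", sequential_monadic_plan(panel, STIMULI))
    print("Forced choice:", forced_choice_plan(panel, STIMULI))
```

The practical point is that each plan collects different information: a monadic design captures single-exposure responses, a sequential monadic design lets every participant react to every alternative, and a forced-choice design presents the alternatives side by side, closer to the trade-off shoppers face at the shelf.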

tip.eps Defining the right test to evaluate a particular hypothesis can drill down into very detailed issues quite quickly, but here are three high-level questions to get you started:

check.png What aspects of the consumer experience are most relevant to testing this hypothesis: responding to a marketing message, choosing in a shopping or other purchasing context, or consuming a product?

check.png What mental responses of the consumer are most relevant to the experience being tested: impressions, reactions, motivations, or learning?

check.png What attributes of the material being tested are most relevant: novelty, familiarity, emotional impact, processing fluency, goal activation, memorability, or some other attribute?

Are You Testing the Right Materials?

When you’ve determined the hypotheses you want to test and the right tests to perform, the next challenge is making sure you have the right materials, or stimuli, for the test. Answering this question may seem like a no-brainer, but collecting and preparing the right materials for a neuromarketing study can involve some choices that may end up making or breaking the outcome of the study.

First, if you’re using visual images or videos for your tests, a number of technical factors need to be considered. Because neuromarketing techniques can be very sensitive to the visual properties of stimuli, you must be careful about issues like image and video resolution, video aspect ratios, and even more esoteric topics like NTSC versus PAL video formats. Expect your neuromarketing partner to have standards for visual stimuli, but don’t be surprised if these standards prove harder to meet than you would expect.

We’ve found, for example, that although marketing departments tend to have at their disposal high-resolution images and videos of their own products and ads, they don’t have comparable resolution stimuli for competing products or ads. Often, the first inclination is to “pull them off the Internet,” but these materials tend to be of lower quality than the professional-grade materials required.

Comparing visual stimuli at different resolutions introduces serious potential distortions into a neuromarketing test. Recall that many nonconscious reactions derive from ease of processing, or processing fluency. If you compare a higher-resolution image to a lower-resolution image, the resulting difference in processing fluency may bias the results, especially if you’re using sensitive brain measurement technologies like electroencephalography (EEG) or functional magnetic resonance imaging (fMRI). Participants’ conscious minds may not register the difference, but their nonconscious responses may be affected by the different resolutions.
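
Checking for this problem doesn’t have to be a manual chore. Here’s a rough Python sketch, assuming the Pillow imaging library is installed and your candidate stimuli sit together in one folder of PNG files (both assumptions are ours), that flags any images whose pixel dimensions don’t match:

```python
from pathlib import Path

from PIL import Image  # Pillow imaging library; assumed to be installed

def check_stimulus_resolutions(image_dir):
    """Flag stimuli whose pixel dimensions differ, since mismatched resolution
    can bias processing-fluency comparisons. Folder and file format are assumptions."""
    sizes = {}
    for path in sorted(Path(image_dir).glob("*.png")):
        with Image.open(path) as img:
            sizes[path.name] = img.size  # (width, height) in pixels
    if not sizes:
        print(f"No PNG images found in {image_dir}")
        return sizes
    unique_sizes = set(sizes.values())
    if len(unique_sizes) > 1:
        print("WARNING: stimuli are not at a uniform resolution:")
        for name, (width, height) in sizes.items():
            print(f"  {name}: {width}x{height}")
    else:
        print(f"All {len(sizes)} stimuli share the same resolution: {unique_sizes.pop()}")
    return sizes

if __name__ == "__main__":
    check_stimulus_resolutions("stimuli/")  # hypothetical folder of test images
```

A mismatch doesn’t tell you which image is “right,” only that the set isn’t yet comparable; the fix is usually to source all stimuli at the same professional-grade resolution before testing.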

tip.eps Here are some technical stimulus-quality issues to keep in mind:

check.png Visual materials being compared should be at the same level of resolution and quality.

check.png Marketing materials being compared should be at the same stage of development; that is, you shouldn’t test a production ad against a pre-production ad.

check.png When testing materials in a real-world environment, like a store or showroom, care must be taken to control the amount of background clutter and competing stimuli in the surrounding environment.

check.png Descriptive written materials should be at the same level of semantic complexity to control for differences in processing fluency of language.

A second important aspect of testing the right materials is balancing what varies between the stimuli being compared against what is controlled, or held constant (not allowed to vary between stimuli). A good place to start is to be clear about whether your hypothesis requires you to test different elements within the same stimulus, like different fonts or product names on essentially the same ketchup bottle, or different overall stimuli, like different ketchup bottle shapes and designs. You would need to prepare very different materials to test these two kinds of hypotheses.

One of the most common mistakes in neuromarketing studies is to fail to properly align hypotheses with testing materials. If the hypothesis focuses on the effect of altering a single element in a stimulus, for example, but the available stimuli include other differences as well, testers often make the erroneous assumption that those materials will be “good enough” for testing the hypothesis. But in fact, such materials are not good enough. Once the results are in, it will be impossible to tell if measured differences are attributable to the element of interest, or to some difference between the stimuli. This is exactly the kind of compromise that produces results that dissatisfy everyone.

A third aspect of selecting materials for testing is maximizing the inferential power of the test, or the amount of confidence you can have in the results. This topic is complex, so we touch on only a few key points here. The main one is that different types of comparisons, using different materials, yield different levels of inferential power:

check.png The weakest level of inferential power is achieved when testing a single stimulus against a standardized measure (a measure that is scaled against a normative database or benchmark).

check.png Stronger inferential power is achieved when you compare two or more stimuli against each other and against a standardized measure.

check.png A higher level of inferential power is achieved when you compare a specific attribute across multiple stimuli in which other possible explanatory elements have been held constant (but the scope of this inference is limited to the attributes tested, so there is a trade-off here).

check.png Whenever possible, include competitive stimuli (for example, competitors’ ads, if you’re testing ads) in the comparison, because competitors’ materials provide broader scope and greater business relevance, as well as greater inferential power.

Are You Sampling from the Right Population?

The next point at which good neuromarketing studies too often go bad is in defining the sample to be tested. Your initial inclination may be to make the test as generalizable as possible by using a gen pop sample (a representative sample of the general population in your market). This is often the best solution when doing a survey study, because after the data is collected you can always use any question in the survey to create and compare subgroups, and you can plan to have enough respondents (survey participants) in your sample to make sure these subgroup comparisons are statistically meaningful.

Unfortunately, this approach won’t work well for a neuromarketing experiment, which differs from a survey in one very important respect: In an experiment, you have to identify your relevant subgroups before you run the study; you can’t construct a new subgroup comparison after the data have been collected. So, whatever comparisons you want to make (based on the hypotheses you want to test) need to be built into the study design from the start, and the sampling has to reflect the comparisons chosen.

Neuromarketing studies, especially those that use brain measurement techniques, can often be conducted with relatively small sample sizes, because there is less random noise to be “averaged out” of the brain measurements than in typical survey responses, and experimental controls are designed to further minimize random noise. But these small sample sizes are based on the assumption that the group being measured is homogeneous, which is to say, it won’t be further subdivided later on in the analysis of the results. If you want to compare two groups, you have to specify them as part of your design, and you have to double the sample size to accommodate the comparison.

This is why it’s important to plan your sampling carefully. If your target market consists of 18- to 34-year-old males, and you want to compare their responses to two ads, sample only from 18- to 34-year-old males. If your business objective is to attract more 18- to 34-year-old females to your product, and you want to see how their responses to your ads compare to men’s, sample from that group as well, but you’ll need to double your sample size.
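
To see how subgroup comparisons multiply the required sample, here’s a rough Python sketch built on a conventional power calculation from the statsmodels library. The medium effect size, 80 percent power target, and simple between-subjects design are illustrative assumptions on our part, not neuromarketing norms; your research partner will have its own sample-size guidelines for its particular measures:

```python
# A minimal sketch, assuming a between-subjects comparison of two ads and a
# "medium" effect size; real neuromarketing metrics and effect sizes differ,
# so treat these numbers as illustrative only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants needed per cell to detect Cohen's d = 0.5 with 80% power
# at alpha = 0.05 in a two-sided independent-samples t-test.
n_per_cell = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                  alternative="two-sided")

males_only_total = round(n_per_cell) * 2           # males only: ad A vs. ad B (2 cells)
with_gender_split_total = round(n_per_cell) * 2 * 2  # add a male vs. female comparison (4 cells)

print(f"Per cell: ~{round(n_per_cell)} participants")
print(f"Target males only (2 cells): ~{males_only_total}")
print(f"Males and females (4 cells): ~{with_gender_split_total}")
```

Whatever the exact numbers, the arithmetic is the same: every subgroup you want to compare adds cells to the design, and every cell needs its own full complement of participants.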

tip.eps Here are some subgroup comparisons that can yield interesting and relevant insights in neuromarketing studies. In planning your sampling criteria, consider whether any of these distinctions should be built into your study:

check.png Men versus women: Gender comparisons often reveal interesting differences, especially in nonconscious responses, that can be leveraged in marketing.

check.png Younger versus older: Cognitive differences in developing, mature, and aging brains can result in different responses to marketing.

check.png Brand loyalists versus brand agnostics: Understanding how loyalists and agnostics differ in responses to marketing can help if the business objective is to increase loyalty.

check.png Current consumers versus lapsed consumers: Lapsed consumers’ nonconscious responses may provide clues to how they can be lured back into the fold.

check.png Consumers from different regions or geographies: Regional differences can often be critical in responses to marketing.

check.png Consumers at different stages of involvement: From first-time buyers to habitual users to dedicated advocates, comparing consumers at different stages of involvement can yield interesting and useful insights.

remember.eps In neuromarketing studies, it’s ideal to define your sample as precisely as possible before the study is conducted. This is often counterintuitive to marketers who are used to working with surveys that achieve greater generalizability with larger sample sizes. Experiments require more planning ahead and are less flexible for discovering interesting differences after the fact. This is another reason why strong hypotheses are so important to successful neuromarketing studies. Having only an undifferentiated gen pop sample, combined with a vague business objective and no hypothesis, is likely to produce hazy results with little decision-making value.

How Will Your Results Change Your Business Actions?

In his book The Power of Intuition, decision-making expert Gary Klein recommends an exercise he calls a pre-mortem. He suggests that every time executives are ready to make a strategic decision, they sit down and imagine a future point in time at which the decision they’re about to make has turned out to be a total failure. Then they should ask themselves how this failure could’ve happened and what they’ll do next. The value of the exercise is that it helps overcome decision makers’ powerful bias toward believing their decisions will be a success.

We suggest a similar exercise for marketing teams preparing a new research study: Sit down and ask what next steps you’ll take based on the results of this study. If your hypothesis is confirmed, what will you do? If it isn’t confirmed, what will you do differently? An honest appraisal of the likely effects of the study you’re designing is an excellent way to counter your natural optimism bias and see whether you’re embarking on a project that will have real business impact or just be another fun exercise in the research sandbox.

To be fair, not every study will produce earth-shattering business implications. And most companies won’t make major business decisions based on any single study, especially a single study that uses neuromarketing techniques for the first time. So, perhaps what you’ll do with the results of this study is compare them with findings from earlier research that used other approaches. That’s a perfectly acceptable conclusion as well. It means you’ll want to review what you’ve learned from those earlier studies, and make sure the current study produces results that are directly comparable. In this case, it’s the larger research program, not any single study, that carries the burden of business impact.

remember.eps The only outcome you need to worry about in this exercise is the one in which you fail to identify any meaningful business implications resulting from your study. If that’s the result, you need to revisit all the previous questions about business objectives, hypotheses, tests, materials, and sample definitions, and ask whether you’ve missed something important. If you still can’t find any business actions that will be impacted by this study, then you have to face the very real possibility that this study isn’t worth doing.

Don’t Pay the Price of a Failure to Communicate

Working through this pre-flight checklist should help you avoid some of the most dangerous pitfalls along the route to a successful neuromarketing study. Although nothing is guaranteed in the cruel world of market research, we’re confident that having clear business objectives, precise hypotheses, the right tests, the right materials, the right sample, and clear business consequences will give your study a much higher likelihood of success than if any of these elements is missing.

Completing the pre-flight checklist prior to launching a study is something you should do together with a neuromarketing partner. An experienced partner will be intimately familiar with each of these questions, will recognize their importance, and will be willing to invest the time and effort to help you answer them to your satisfaction. If a potential partner tells you it’s more important to “just get something done” than to spend time answering these questions, that’s a good indication that you may not be working with the right neuromarketing partner — which just happens to be the topic of the next chapter: how to pick the right neuromarketing partner.
