Chapter 17

Neuromarketing on a Budget: Inexpensive Ways to Learn from Your Customers

In This Chapter

arrow Setting up and running behavioral response-time studies

arrow Using online services and “gamification” to test marketing materials inexpensively

arrow Conducting do-it-yourself behavioral experiments

arrow Deciding how to trade off the costs and benefits of different types of neuromarketing studies

Not all neuromarketing studies require high-end machinery and a team of PhDs to coax insights out of your customers’ brains. In this chapter, we look at some less-expensive options that you can implement yourself, commission from an online partner, or carry out with the assistance of a neuromarketing consultant.

We begin with a discussion of behavioral response-time studies, a simple technique that is easy to prepare, is easy to score and interpret, and yields reliable results about nonconscious associations with brands and products. Next, we consider some new online tools that take techniques previously confined to the lab, or previously impractical, and make them available as convenient services on the Internet, including online eye tracking, facial-expression analysis, and prediction markets.

Then we look at some inexpensive ways you can carry out behavioral experiments, both in stores and online. Using behavioral economics principles, simple experiments can identify new opportunities and predict the impact of sales and product choice strategies. Finally, we look at the general problem of balancing costs and benefits when comparing different neuromarketing approaches, and suggest some guidelines for making the right choice when comparing more- and less-expensive options.

Running Response-Time Studies

Behavioral response-time studies require no sensors, specialized labs, or complex data analysis algorithms. Yet they can reveal quite a bit about how knowledge and concepts are tied together in consumers’ minds.

Seeing the logic of response-time studies

Response-time studies are based on properties of the mental mechanism we discuss at length in this book — priming. Our brains treat everything we experience, including our internal thoughts and what occurs in the external world around us, as input for prediction. As part of interpreting every input, the brain asks, “What’s next?” Through a process called associative activation, related thoughts and concepts are made more accessible, so they can be brought into conscious thought more quickly if required.

Response-time studies take advantage of this property of priming. Things that are associated in memory with something we’re experiencing become more accessible, so measuring the amount of time it takes to access them is an indicator of how associated they are. If two items (images or words) are shown in succession, and a person is given some behavioral task to perform with regard to the second item (like pressing a button), he or she will perform that task more rapidly and more accurately the more connected those two items are in his or her mind. The first item is called the prime, and the second item is called the target.
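To make the mechanics concrete, here’s a deliberately crude sketch of a single prime/target trial in Python. It’s for illustration only: console input adds far too much timing noise for real research, which is why dedicated presentation tools like Inquisit exist (more on that later in this chapter). The brand and attribute words are just placeholders.

```python
# A bare-bones console illustration of a prime/target response-time trial.
# Real studies need calibrated presentation software; console I/O is far
# too noisy for production use.
import time

def run_trial(prime: str, target: str) -> float:
    """Show a prime, pause briefly, show the target, and time the response."""
    print(f"\n{prime}")          # the prime
    time.sleep(0.15)             # ~150 ms prime-to-target interval
    start = time.perf_counter()  # high-resolution clock, not time.time()
    input(f"{target}  (press Enter as soon as you've read the word) ")
    return (time.perf_counter() - start) * 1000  # response time in ms

rt_related = run_trial("Apple", "creative")
rt_unrelated = run_trial("Apple", "reliable")
print(f"Related pair: {rt_related:.0f} ms, unrelated pair: {rt_unrelated:.0f} ms")
```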

Three types of response-time studies can provide valuable insights for marketers:

check.png Semantic-priming studies: In semantic priming, if the meaning of the target is associated with the meaning of the prime, the target can be processed faster and more accurately. For example, if you associate the brand Apple with creativity more than reliability, after seeing the prime Apple, you’ll be able to process the target word creative faster than the target word reliable. (Generally, the words are separated by less than half a second, and the average differences in response times are on the order of 50 to 150 milliseconds.)

check.png Affective-priming studies: In affective priming, if the emotional valence (positive or negative) of the target is in the same direction as the prime, the target can be processed faster and more accurately. For example, if you have positive feelings about the brand Apple, after seeing the prime Apple, you’ll be able to classify the word sunshine as a positive word faster than you’ll be able to classify the word terror as a negative word.

check.png Implicit Association Test (IAT): This test uses affective priming in a more complicated way. Instead of classifying individual targets after seeing individual primes, the task is to classify them together. For example, to test implicit associations with Coke and Pepsi, one part of the test (there are several parts) would have you sort targets, which are either images associated with Coke or Pepsi or positive or negative words, into two combined categories: Coke-or-positive or Pepsi-or-negative. If you have stronger positive associations with Coke, you'll be able to assign both Coke images and positive words faster to the Coke-or-positive category. (There is more to the IAT than this; we recommend checking out http://implicit.harvard.edu/implicit to get a fuller explanation and try an IAT yourself.)

All these tests are designed to measure implicit or automatic associations, not conscious, considered choices. So, they’re particularly useful when you suspect that direct questioning may be vulnerable to response biases that would distort results.

technicalstuff.eps In Chapters 5 and 7, we discuss the distinction between associative priming and motivational priming. Associative priming activates associations, while motivational priming also activates conscious or nonconscious goals, which then can result in goal-pursuit behavior, as described in Chapter 7. When semantic priming and affective priming are used in response-time studies, we’re leveraging their associative-priming properties. Although marketers are generally more interested in triggering motivational priming in real-world marketing situations, the response-time effects of associative priming are what’s being used in these techniques.

Measuring implicit brand attitudes with response-time studies

The IAT has been used extensively to study implicit brand attitudes and has been found to produce similar results to explicit attitude tests for noncontroversial topics. But more important, for attitudes that people may be reluctant to reveal in interviews or surveys, such as opinions about brands that are associated with temptation, impulsiveness, or indulgence, the IAT can give more accurate readings.

The IAT is particularly useful for comparing pairs of brands that are natural counterparts or alternatives, because it measures the strength of attitudes in relative, not absolute, terms. There are also versions of the IAT that can measure responses to single brands if a natural counterpart is not available.

Here are some examples of IAT brand and consumer behavior studies that yielded interesting results:

check.png An IAT study of preferences and consumption of low- and high-calorie foods found that explicit and implicit attitudes matched only for low-calorie foods. For high-calorie foods, implicit attitudes were much more positive and also were better predictors of actual food consumption.

check.png An IAT study of attitudes toward a popular clothing retailer found that implicit attitudes didn’t match explicit attitudes, but they did a better job of predicting shopping intentions than the explicit responses did.

check.png In a study of celebrity voices in advertising, researchers found that when participants explicitly rated ads narrated by celebrities they liked, they discounted the impact of the celebrity, but when measured with an IAT, their implicit attitudes toward the ads were significantly influenced by their attitudes toward the celebrities.

check.png In a comparison of Mac and PC computer users, IAT results were consistent with survey results measuring explicit attitudes, usage, and ownership, but the IAT results showed much greater response-time differences for Mac users, implying a significantly stronger association with the Mac brand, a finding that was not revealed by explicit measures.

Setting up and deploying an IAT is relatively simple if you use an online partner and template to get started. For example, a company called Millisecond Software (www.millisecond.com) has a web-based tool called Inquisit that can be used to create IATs and many other types of response-time tests and run them with online participants on the Internet. The company provides scripts that you can download and customize with your brand-specific information. It also provides instructions for how to analyze the output data in a spreadsheet program to generate results.
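If you’re curious what that spreadsheet analysis boils down to, here’s a simplified sketch of the widely used IAT D-score calculation (Greenwald, Nosek, and Banaji, 2003) in Python. The full algorithm includes additional rules for error trials and practice blocks that we omit here, and the response times shown are invented.

```python
# A simplified version of the IAT D-score. The full published algorithm has
# extra rules (error penalties, practice blocks); this sketch shows only the
# core computation you'd otherwise reproduce in a spreadsheet.
from statistics import mean, stdev

def d_score(compatible_ms: list[float], incompatible_ms: list[float]) -> float:
    # Discard implausibly slow trials (> 10,000 ms), per the standard algorithm.
    comp = [rt for rt in compatible_ms if rt < 10_000]
    incomp = [rt for rt in incompatible_ms if rt < 10_000]
    pooled_sd = stdev(comp + incomp)  # SD across all trials in both blocks
    return (mean(incomp) - mean(comp)) / pooled_sd

# Hypothetical response times: Coke-or-positive (compatible) block versus
# Coke-or-negative (incompatible) block. Positive D = stronger Coke-positive link.
compatible = [612, 655, 587, 701, 640]
incompatible = [798, 845, 760, 910, 820]
print(f"D = {d_score(compatible, incompatible):.2f}")
```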

technicalstuff.eps Our purpose in writing this book is to provide a general reference and not to compare neuromarketing vendors, so for the most part we avoid mentioning vendors or providing information about how to reach them. In this chapter, we make a bit of an exception to this rule because we want to show you examples of some of the solutions we’re describing and give you a starting point for further exploration on your own. So, where appropriate, we mention example vendors with regard to a particular approach. This is for illustrative purposes only and doesn’t mean we endorse any vendors mentioned or that there aren’t other qualified vendors out there who may provide similar services.

Measuring semantic and emotional connections with response-time studies

Semantic-priming studies use response times to measure the strength of associations between words, concepts, and imagery. Setting up a semantic-priming study is even simpler than setting up an IAT. The most basic design displays a series of paired words or images (a prime and a target) with a short pause between each pair. The participant is given a distractor behavioral task to produce responses. A common task is to press one button if the target word is a real word (like table or quilt) or a different button if it is a pseudo-word (like toble or quelt). This forces the subject to read every word before making a choice, giving you an unbiased response time for the pairs of words you’re really interested in.

A semantic-priming study usually has about 100 to 200 pairs of words, with about 25 percent pseudo-word targets to keep the participant engaged. For the rest of the pairs, you provide words or images that represent the connections you’re interested in testing. For example, if you want to test associations between five competing products and five product attributes, you can create a script in which each product and attribute is paired multiple times (as a general rule, plan for five repetitions of each pair to smooth out response variations), add in the pseudo-word pairs, randomize the order, and load the stimuli into a script using a tool like Inquisit (see the preceding section) or another response-time program.
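Here’s a minimal sketch of what generating such a trial list might look like in Python, assuming five placeholder products and attributes. A real study would load the result into a presentation tool like Inquisit rather than run it directly.

```python
# A sketch of building a semantic-priming trial list as described above:
# 5 products x 5 attributes x 5 repetitions, padded with ~25 percent
# pseudo-word trials and shuffled. All stimuli here are placeholders.
import random

products = ["BrandA", "BrandB", "BrandC", "BrandD", "BrandE"]
attributes = ["reliable", "creative", "fast", "simple", "premium"]
pseudo_words = ["toble", "quelt", "brane", "flimp"]

REPS = 5  # repeat each prime-target pair to smooth out response variation

trials = [(p, a, "word") for p in products for a in attributes
          for _ in range(REPS)]                       # 125 real-word trials

n_pseudo = len(trials) // 3                           # ~25% of the final list
trials += [(random.choice(products), random.choice(pseudo_words), "pseudo")
           for _ in range(n_pseudo)]

random.shuffle(trials)
print(f"{len(trials)} trials, e.g. {trials[0]}")
```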

Two key elements for getting a meaningful semantic-priming response are the length of time each word appears on the screen and the length of the pause between the prime and the target. For exposure, 200 to 500 milliseconds (two-tenths to five-tenths of a second) is common. For the intervening pause, 100 to 200 milliseconds is about right; associative activations are strongest in this time frame, after which they quickly decay.

Semantic-priming studies have been used to test implicit associations with different types of products. For example, one study looked at how different global concepts were associated with the words Coke and water in consumers’ minds. Researchers found the words nature and mystery to have the largest response-time differences when primed by Coke, with nature more connected to Coke for men and mystery more connected to Coke for women.

The ability to probe implicit semantic connections can help marketers in two ways: to discover new connections with their products and brands that they may not have known existed, and to monitor how well connections they’re trying to communicate are getting established and reinforced in consumers’ minds.

In affective priming, the automatic emotional response to the prime is inferred from the speed with which the target word is classified as emotionally positive or negative. The target words are selected to have unambiguous positive or negative meanings (for example, words like glorious, smile, grief, or curse), so the task is easy. The real purpose of the task is to see how long this easy classification decision takes, given the prime that preceded it. If the prime activates positive emotional connections, positive target words will be classified faster than negative target words, and vice versa if the prime activates negative emotional connections.

Affective-priming studies are set up very similarly to semantic-priming studies. Pairs of words are created, presentation is randomized, exposure times and prime-to-target intervals are set (100-millisecond intervals are best for emotional priming), and between 100 and 200 trials are presented. For affective priming, you have a smaller number of primes, and you want to be sure each of these is presented at least ten times with both positive and negative target words. It’s often a good idea to include neutral primes among the primes of interest, both to avoid monotony effects from repeating primes and to create a control condition.
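Scoring an affective-priming study comes down to comparing response times to positive versus negative targets for each prime. The following sketch shows that arithmetic on a handful of invented trials; faster classification of positive targets after a prime suggests a positive implicit attitude toward it.

```python
# A sketch of scoring an affective-priming study: for each prime, compare
# mean response times to positive versus negative targets. A positive score
# means positive words were classified faster after that prime. The trial
# data here are hypothetical.
from collections import defaultdict
from statistics import mean

# (prime, target_valence, response_time_ms)
trials = [
    ("Apple", "positive", 540), ("Apple", "negative", 630),
    ("Apple", "positive", 560), ("Apple", "negative", 610),
    ("neutral", "positive", 590), ("neutral", "negative", 585),
]

rts = defaultdict(list)
for prime, valence, rt in trials:
    rts[(prime, valence)].append(rt)

for prime in {p for p, _ in rts}:
    score = mean(rts[(prime, "negative")]) - mean(rts[(prime, "positive")])
    print(f"{prime}: positivity index = {score:+.0f} ms")
```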

In published academic studies, affective priming has been used mostly to study implicit attitudes in the political realm, including attitudes toward candidates, issues, and groups. This work has been adapted by specialist market-research firms (for example, Sentient Decision Science [www.sentientdecisionscience.com]) for studying marketing stimuli, including brands, advertising, and products.

remember.eps Both semantic priming and affective priming are proven and easy-to-implement methods for measuring implicit associations with brands and products. Most researchers agree that emotional connections are more important to monitor closely, because they’re more powerful predictors of attitudes, choices, and behaviors. A cost-effective approach is to use affective priming to test whether you’re connecting to the right emotions, and then use semantic priming to test whether you’re communicating that connection effectively.

Leveraging Online Services to Tap Into the Wisdom of Crowds

Over the last decade, more and more research services have begun to migrate to the Internet. The first step was the establishment of online panels of consumers ready and willing to answer survey questions on just about any topic. Today, thousands of panels are available to researchers, promising to deliver results from the most specialized micro-interest groups to the most general population panels representing diverse regional, national, and global populations.

Recently, new research services have begun to appear on the Internet and on mobile devices that offer innovative measurement solutions derived from biometric and response-time methodologies, as well as crowdsourcing solutions that bypass the biases of individual opinion surveys by asking participants to predict marketplace outcomes rather than their own future behaviors. These services provide very cost-effective research alternatives, compared to custom neuromarketing projects using dedicated labs and methodologies, and they’re worth considering by anyone looking to try out neuromarketing on a budget.

Activating the webcam: Online eye tracking and facial expression analysis

As high-resolution webcams became standard features on almost all personal computers and mobile devices, enterprising technology entrepreneurs realized these video cameras could be used to capture eye tracking and facial expressions for research purposes. Although these systems don’t provide the precision and advanced features of lab-based hardware and software, they represent a growing segment of the neuromarketing field and can be a good choice if your needs match their current capabilities.

Online eye tracking has been available since 2010. Services are very easy to use. Here's an example process from one online company, EyeTrackShop (www.eyetrackshop.com):

1. You submit the materials you want to study to the vendor, typically static images or web pages.

2. The vendor builds a mock-up for the test, which you approve.

3. The test is deployed to respondents matching your recruiting criteria.

4. Respondents receive an e-mail announcing the test, go to a website, and provide permission to use their webcam.

5. Conditions like lighting and head position are checked, gaze patterns are calibrated, and if everything is working properly, respondents begin the test.

6. Respondents’ gaze patterns are tracked while viewing the stimuli. A traditional survey questionnaire may be added at the end of the test.

7. The vendor analyzes the data and returns a report to you with graphics and statistics, including heat maps, area-of-interest fixation times, gaze path, and comparisons with questionnaire answers.
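To demystify one of the metrics in Step 7, here’s a sketch of how area-of-interest dwell time can be computed from timestamped gaze samples. The coordinates, sampling rate, and data are invented, and vendors’ actual pipelines are more sophisticated (fixation detection, smoothing, and so on).

```python
# A sketch of one metric an eye-tracking report contains: total dwell time
# per area of interest (AOI), computed from timestamped gaze samples. The
# AOI coordinates and samples below are made up for illustration.
aois = {
    "logo":     (0, 0, 200, 100),      # (left, top, right, bottom) in pixels
    "headline": (0, 120, 640, 200),
}

# (timestamp_ms, x, y) gaze samples at a fixed sampling rate
samples = [(0, 50, 40), (33, 60, 45), (66, 300, 150), (99, 310, 160)]
SAMPLE_MS = 33  # ~30 Hz, typical of webcam-based tracking

dwell = {name: 0 for name in aois}
for _, x, y in samples:
    for name, (l, t, r, b) in aois.items():
        if l <= x <= r and t <= y <= b:
            dwell[name] += SAMPLE_MS

for name, ms in dwell.items():
    print(f"{name}: {ms} ms of gaze time")
```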

Webcam-based eye tracking has some compelling advantages. Turnaround times are fast, averaging about five to seven business days for a typical study, with some vendors offering 48-hour turnaround for expedited studies. Costs are low, estimated to be about one-third the cost of an equivalent lab-based study, according to one vendor. And perhaps the biggest advantage is that studies can be run anywhere in the world where a panel participant can be found with a computer, a webcam, and an Internet connection. This allows marketers to reach audiences that would be impractical or impossible to test in any other way.

There are some limitations to online eye tracking as well. The spatial resolution of webcam-based eye-tracking software is about half the resolution of dedicated eye-tracking equipment, and the data collection rate is slower because it’s dependent on connection speeds and the processing power of the computer being used, so fast saccades (eye movements) cannot be captured with this approach. Also, online solutions tend to be better for static images than videos. All these limitations are receding rapidly as computers and video cameras get more powerful and Internet connections get faster.

The value proposition for online facial expression analysis (also called facial coding or facial imaging) is quite similar. Vendors are easy to work with, and material can be submitted, prepared for study, deployed, and tested in short turnaround times. Results typically include scores for a variety of discrete emotions. Online facial-coding vendors like nViso (www.nviso.ch) and Realeyes (www.realeyesit.com), for example, provide moment-to-moment scores for discrete emotional states that more or less match emotion expert Paul Ekman's seven basic emotions (introduced in Chapter 16): happiness, surprise, sadness, fear, anger, disgust, and contempt, plus a placeholder for neutral states. Studies are inexpensive and can be fielded to large samples of participants in diverse locations, with results available in days rather than weeks.
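The moment-to-moment output of these services can be summarized quite simply. The following sketch assumes a hypothetical data format of one per-frame score (0 to 1) per emotion, which is not any particular vendor’s format, and reports the mean and peak for each emotion.

```python
# A sketch of summarizing moment-to-moment facial-coding output. Vendors'
# actual data formats differ; assume one dict of per-emotion scores (0-1)
# per video frame. We report the mean and peak score for each emotion.
frames = [
    {"happiness": 0.10, "surprise": 0.05, "disgust": 0.01},
    {"happiness": 0.45, "surprise": 0.30, "disgust": 0.02},
    {"happiness": 0.70, "surprise": 0.10, "disgust": 0.01},
]

for emo in frames[0].keys():
    scores = [f[emo] for f in frames]
    print(f"{emo}: mean={sum(scores)/len(scores):.2f}, peak={max(scores):.2f}")
```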

As with online eye tracking, there are limitations to online facial expression analysis compared to expert facial-coding approaches. According to Paul Ekman, automated facial coding inevitably takes shortcuts compared to his Facial Action Coding System (FACS) taxonomy of facial expressions and is currently able to achieve about 70 percent accuracy in identifying emotions, as compared to over 90 percent accuracy for a trained FACS analyst. Another limitation, noted in Chapter 16, is that facial expression analysis can’t measure muscle movements that occur below the threshold of visual observation. Measuring at that level requires applying sensors to the face, which takes the methodology out of the realm of web-based passive techniques.

remember.eps What is still missing from webcam research solutions is the integration of these two capabilities — eye tracking and facial expression analysis combined in a single web-based application. Although a fully integrated solution has not yet appeared (as of mid-2013), vendors from both camps are well aware of the added value of such a solution, and development efforts are rumored to be underway. Meanwhile, vendors are working together to provide interim solutions. EyeTrackShop, for example, offers integration through partnerships with two facial-coding vendors, allowing its clients to choose which they want to use, if they have a preference.

Using “gamification” in online research

Gamification is not a specific type of study, like eye tracking or response-time studies; instead, it’s a way of presenting studies. Gamification emerged in online survey research as a solution to the problem of making surveys more engaging and fun. By applying some of the features of games to the data collection process, gamification has been used to increase response rates, completion rates, engagement, and accuracy for online surveys. Gamification features now common in online surveys include the following:

check.png Framing data collection tasks as challenges

check.png Creating win conditions and offering rewards

check.png Displaying competitive rankings on leaderboards

check.png Awarding badges as symbols of accomplishment and reputation

check.png Displaying status and progress on social networks

There is a vigorous debate in the online research world as to whether the increased engagement of “gamified” surveys creates biased responses that render survey results non-generalizable. There are good arguments on both sides of this debate, but they all revolve around the question of how gamification features impact conscious responses, which are more susceptible to response biases, compared to nonconscious responses, which are less likely to be biased by the gamification features.

For studies that measure nonconscious responses directly or try to suppress conscious correction of nonconscious responses, gamification is a natural way to maintain interest and consistency while disguising the true purpose of the study. Engagement becomes not a bias, but a planned distractor that improves the reliability and validity of the test. Here are some examples of gamification applied to the measurement of nonconscious responses in online testing:

check.png Behavioral response-time studies: Creative contexts for response-time measurement include “shooting gallery” and “target practice” games in which the prime and target stimuli appear as elements in the game and targets are chosen with the keyboard or mouse. A variation is the “visual search” task, where participants have to pick out a target image in a grid of distractor images. Response times are direct measures of the degree to which the target attracts bottom-up automatic attention.

check.png Forced-choice studies: Adding time limits or distractions to forced-choice tests has been found to be a good way to suppress conscious deliberation. Studies conducted by research firm BrainJuicer (www.brainjuicer.com), for example, have found that adding time limits and distractor tasks to a package-preference choice task resulted in significantly higher selection rates for simpler, less demanding designs.

check.png Recognition studies: An innovative way to measure features like processing fluency or familiarity is a recognition task in which the object is slowly transitioned from completely blurry to sharp focus, with the participant picking the transition point at which the object becomes recognizable. (We sketch a bare-bones version of this blur-to-sharp procedure after this list.) This type of test can also be used to measure the visual salience of images and designs, as discussed in Chapter 13.

check.png Memory retention studies: An online version of the classic card game Concentration, in which a player flips over pairs of tiles to reveal objects whose location the player must remember, can be used to measure variations in memory retention for different products or brands.
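As promised above, here’s a bare-bones sketch of the blur-to-sharp recognition task using the Pillow imaging library. The image path is a placeholder, and frame.show(), which opens an external viewer, stands in for the tightly controlled on-screen presentation a real test requires.

```python
# A sketch of the blur-to-sharp recognition task described above, using
# Pillow. "product.png" is a placeholder path; a real test would display
# each step in a controlled window and log precise response times.
import time
from PIL import Image, ImageFilter

img = Image.open("product.png")

start = time.perf_counter()
for radius in range(20, -1, -2):   # progressively less blur
    frame = img.filter(ImageFilter.GaussianBlur(radius))
    frame.show()                   # stand-in for a proper display loop
    answer = input(f"Blur radius {radius}: recognize it yet? (y/n) ")
    if answer.lower() == "y":
        elapsed = time.perf_counter() - start
        print(f"Recognized at radius {radius} after {elapsed:.1f} s")
        break
```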

Although nonconscious response tests aren’t yet offered as a built-in capability by online survey companies, we believe they soon will be. As an interim solution, it’s possible to build a test using application development software and then access the test via an exit link or hyperlink from a traditional online survey. A key requirement for response-time games is to ensure accurate timing measurement when capturing responses. Many PC clocks are unreliable at millisecond resolution, so specialty software like Inquisit (see the “Measuring implicit brand attitudes with response-time studies” section, earlier in this chapter) may be needed to compensate for this deficiency.
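In Python, for instance, the difference comes down to which clock you read. The sketch below uses time.perf_counter(), a high-resolution monotonic timer, rather than time.time(), whose tick size can be too coarse for millisecond measurements on some systems.

```python
# A sketch of why clock choice matters for response-time games:
# time.perf_counter() is a high-resolution monotonic timer, whereas
# time.time() may tick too coarsely on some systems for millisecond work.
import time

t0 = time.perf_counter()
time.sleep(0.001)                      # a 1 ms "response window"
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"perf_counter measured: {elapsed_ms:.3f} ms")
print(f"reported clock resolution: {time.get_clock_info('perf_counter').resolution}")
```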

“Crowdsourcing” with prediction markets

Crowdsourcing is a recently coined term that refers to the collecting and aggregating of views from a large number of people (the crowd) to choose a preferred course of action. One of the most relevant examples of crowdsourcing for research is prediction markets (online marketplaces for making predictions based on consumers’ beliefs about possible future outcomes). Consumers don’t record their own opinions, as they would in a traditional survey; instead, they use the (virtual) buying and selling of options or shares as a way to express what they think other people believe will happen in the future. The method takes advantage of a property of human judgment that we discuss in Chapter 15: the fact that people are more accurate when they predict what others will do than when they predict what they themselves will do.

technicalstuff.eps Prediction markets operate on the same principle as a stock market. Say a company wants to test three new product concepts. Consumers are invited to “buy” shares in the concept they expect to win (however the market defines winning). After each round of buying and selling, the “price” of each share is a measure of how all the participants in the market have invested in the available options. Each participant can then revise his or her investments in light of these rising and falling prices. Eventually, a concept emerges as the preferred one, attracting more investment than others.
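One common pricing mechanism behind real prediction markets is the logarithmic market scoring rule (LMSR). We can’t say which mechanism any particular vendor uses, but this sketch shows the essential idea: share purchases move prices, which participants then read as the crowd’s current forecast.

```python
# A sketch of the logarithmic market scoring rule (LMSR), one common
# prediction-market pricing mechanism. Not necessarily any vendor's
# implementation; it illustrates how buying shares moves prices.
import math

def lmsr_prices(shares: list[float], b: float = 100.0) -> list[float]:
    """Price of each concept given total shares bought; prices sum to 1."""
    weights = [math.exp(q / b) for q in shares]
    total = sum(weights)
    return [w / total for w in weights]

shares = [0.0, 0.0, 0.0]               # three product concepts, no trades yet
print(lmsr_prices(shares))             # all priced at ~0.33

shares[1] += 60                        # the crowd buys into concept 2
print(lmsr_prices(shares))             # concept 2's "price" rises
```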

This approach has shown superior results in a number of experiments in which one group of consumers participates in an online prediction market and another group responds to a traditional survey asking for personal opinions. Researchers believe the greater accuracy of prediction markets is a function of the feedback provided by the buying and selling behavior of others, which translates into additional information that participants use to adjust and focus their own investments. As the market as a whole coalesces on a preferred choice, it often shows an uncanny ability to predict the actual result that later appears in the real marketplace.

Prediction markets can be easily set up using online providers like Inkling (www.inklingmarkets.com), a service that enables creating and managing both public and private online markets for various purposes. A research provider that has done interesting work in product concept testing is BrainJuicer (www.brainjuicer.com).

Conducting Do-It-Yourself Behavioral Experiments

Sometimes the best way to test a neuromarketing hypothesis is to conduct a simple behavioral experiment. Because behavior is the ultimate outcome of activity in the brain, behavioral experiments can provide direct evidence of how impressions, evaluations, goals, and decisions that occur in the brain translate into actual behaviors in the world.

tip.eps Of the many kinds of experiments described in this book, those derived from behavioral economics are the easiest to design and implement. They focus on the influence of situational factors, which are easy to set up and control, and they measure consumer behaviors (usually choices or purchases) as outcomes, which are easy to observe, count, and compare.

Setting up and running behavioral experiments

There are two contexts in which simple behavioral experiments make sense: in a retail store and online. In-store experiments are conducted by retailers and product marketers all the time — for example, every time they move products around on the shelf — but these adjustments are often ad hoc and informal. More controlled and powerful experiments can be implemented using the basic principles of experimental design (summarized in Chapter 19) to test key elements of the marketing mix in any shopping environment.

Here are some examples of experiments (some are mentioned elsewhere in the book) that can serve as models for do-it-yourself tests in retail settings:

check.png Testing the effect of background music: The wine test described in Chapter 12 is a good example of controlling the stimuli in a test to isolate the effect of one environmental factor on sales — in this case, the effect of French versus German music on French versus German wine sales.

check.png Testing the threshold of choice overload: The jam varieties test mentioned in Chapter 12 illustrates how to test for the effects of variety on choice behavior. Similar experiments can be designed to test the effect of categorization on choice in different product classes.

check.png Testing the effect of scent in the air: In a behavioral experiment described by research firm BrainJuicer, scent dispensers were installed in two matched lingerie stores. Over a period of six weeks, scents were introduced in one store but not in the other for one week; then the treatments were reversed the next week. At the end of the experiment, sales were compared between the two stores. This simple and elegant design effectively controlled for a wide variety of extraneous factors that could influence sales in addition to scent in the air. (One way to analyze this kind of crossover design is sketched after this list.)

check.png Testing product adjacencies: In an experiment described in several trade-show presentations, a snack-food company compared aisle and product sales across multiple grocery stores when chips and dip were placed together in the aisle or kept separate. Using experimental design to balance other factors that could influence sales, they discovered that co-placement of these related products resulted in an average increase of 7 percent in dip sales and 3 percent in overall aisle sales, creating a persuasive case for co-placement for both the snack company and the retailer.

check.png Testing the effect of product bundling: In his book Decoded: The Science Behind Why We Buy (Wiley), Phil Barden describes many clever behavioral experiments, including a series of experiments conducted in school cafeterias to test the impact of different food presentations on eating behavior. One notable finding was that bundling healthy desserts as part of the price of the lunch, but charging separately for unhealthy desserts, resulted in a 71 percent increase in healthy dessert consumption and a 55 percent decrease in unhealthy dessert consumption.
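As noted in the scent example, a crossover design lends itself to a simple week-by-week comparison. Here’s a sketch of that analysis with invented sales figures: each week one store had scent and the other didn’t, so the scent-on minus scent-off difference can be computed within each week.

```python
# A sketch of analyzing a two-store crossover experiment like the scent
# study above. All sales figures are invented for illustration.
from statistics import mean

# (week, scent_on_sales, scent_off_sales) -- stores alternate treatments
weeks = [(1, 10400, 9700), (2, 11200, 10100), (3, 9800, 9900),
         (4, 10900, 9600), (5, 11500, 10400), (6, 10700, 10000)]

diffs = [on - off for _, on, off in weeks]
lift = mean(diffs) / mean(off for _, _, off in weeks)
print(f"average weekly lift with scent: {mean(diffs):.0f} units ({lift:.1%})")
```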

tip.eps The key elements of good in-store experiments are

check.png Clear hypotheses

check.png Precise definitions of the environmental features you’re comparing

check.png Unambiguous behavioral outcomes

check.png Good controls to minimize the influence of extraneous factors on your results

Thanks to the availability of automated testing tools and the inherent flexibility of web-page design, online experiments are even easier to set up and run. Using testing tools like the online experiment manager Optimizely (www.optimizely.com), you can quickly and easily set up testing scenarios — for example, moving around the placement of your "Buy Now" button — and let the testing software take care of the details: randomly presenting each alternative design, tallying the behavioral results for each option, and preparing comparative statistics so you can see what works best.
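Under the hood, the comparative statistics such a tool reports are straightforward. Here’s a sketch of a two-proportion z-test on invented conversion counts for two button placements; the testing software does this bookkeeping (and much more) for you.

```python
# A sketch of the arithmetic behind an A/B test report: compare conversion
# rates for two page variants with a two-proportion z-test. The counts are
# invented for illustration.
import math

def ab_z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = ab_z_score(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
verdict = "significant at ~95%" if abs(z) > 1.96 else "not significant"
print(f"z = {z:.2f}  ({verdict})")
```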

Testing behavioral economics principles in real-world settings

To get the full benefit of do-it-yourself experimentation, you need a clear understanding of what you want to test, as well as how you want to test it. This is where behavioral economics, with its emphasis on heuristics (judgment and decision-making shortcuts) and biases, can provide guidance for identifying candidate scenarios for experimental testing.

In Chapter 8, we list six prominent heuristics that have been identified by behavioral economists (there are, of course, many more heuristics, but these are some of the most well-known). To illustrate how behavioral economics principles can guide the formulation of simple experiments, let’s take another look at each of these heuristics and see how they may help generate useful hypotheses for experimental testing:

check.png Loss aversion: This is the principle that people weigh losses more heavily than equivalent gains. Consider testing whether presenting an offer as achieving a gain versus avoiding a loss leads to differences in sales.

check.png Anchoring: This is the principle that people tend to compare things in relative, not absolute, terms. Anchoring is especially relevant to pricing strategies and the order in which alternatives are presented. Consider testing whether demand for a premium product increases when it’s presented next to an even higher-priced alternative versus when it’s presented as the most expensive option.

check.png Framing: This is the principle that context matters, sometimes more than content. Framing is particularly influential when products are placed in a promotion versus prevention frame. For example, consider testing whether consumers are more likely to buy a product that “enhances energy” versus “prevents fatigue.”

check.png Default bias: This is the principle that people tend to accept defaults rather than make active selections. Consider testing how product extensions are presented: for example, compare sales of a product with the warranty included by default against sales of one for which the warranty must be added through a separate decision.

check.png Affect heuristic: This is the principle that consumers often use affect (positive or negative feelings) as a substitute for making cognitively demanding logical calculations. Consider testing whether adding likable imagery (smiling faces, puppies) to a complex choice task changes choice behavior.

check.png Endowment effect: This is the principle that people treat things they have as more valuable than things they don’t have. Recent research on the endowment effect has found that it also increases to the extent that an object is perceived as having any personal meaning, even if someone doesn’t own it. So, consider an experiment in which a product is accompanied by a “background story” in one condition and not in another. Does the story increase the price consumers are willing to pay?

As is clear from these brief examples, behavioral economics is a fertile field for marketers to generate insights and hypotheses for do-it-yourself experimentation. Unlike more complex neuromarketing alternatives, these experiments can often be managed without requiring the intervention and added expense of a neuromarketing partner.

Balancing Costs and Benefits in Neuromarketing Studies

Looking at the wide range of neuromarketing solutions described in this chapter and in Chapter 16, you may be wondering how it’s possible to navigate through so many possibilities and select the right approach to match both your research needs and your budget.

We address this question from a technology point of view in Chapter 18 and from a partnering point of view in Chapter 21, but we want to make a couple comments here from a cost-benefit point of view.

In most technology-based fields, the more technology you apply to a problem, the more precise, understandable, and action-relevant information you get as a result. In medicine, for example, you can get pretty good and relatively cheap information from a physical examination, but you can get much more accurate and actionable information from a CT scan.

remember.eps In neuromarketing, this equation of advanced technology, higher cost, and better understanding hasn’t yet been achieved. It’s somewhat ironic, and a clear challenge to the field, that the less technologically advanced approaches are often the easiest to understand. As complexity increases, metrics and measures seem to get more obscure and harder to interpret, not less.

People find it relatively easy to grasp how response-time studies work, and what the results mean. They understand how eye tracking works, and why it’s important to know where people are looking and how long they’re looking there. They know it’s better for an ad to produce smiles than frowns. Similarly, they “get” most biometric measures: It makes sense that we open our eyes wider when we’re surprised, our hearts race and our palms sweat when we’re excited, our pupils dilate when we’re interested, and so on.

But the more advanced approaches in neuromarketing are still struggling to make their metrics meaningful and actionable. Few marketers walk into a first meeting with an EEG or fMRI neuromarketing specialist knowing anything about brain waves or blood flow to regions of the brain with unpronounceable names, nor do most marketers know why they should care about these things.

Given that computing power in the modern world will continue to increase at an exponential rate, all the simple and inexpensive solutions described in this chapter will inevitably continue to get smarter, faster, and cheaper over time. If the more advanced neuromarketing technologies want to compete, they have to translate their technological superiority into business cost-benefit terms that their clients can understand and will be willing to pay for.

tip.eps What’s the potential buyer of neuromarketing research to do? We suggest a few basic guidelines:

check.png Keep an eye on the web-based services. Limitations that may have made them noncompetitive six months ago may be gone today.

check.png Don’t rely on vendors to tell you what’s wrong with other vendors. Rely on your own research, or hire a neutral consultant to help you sort out the alternatives.

check.png Follow a disciplined selection process to decide the best approach and cost-benefit trade-off for you (see Chapter 21).

check.png Don’t be afraid to try some simple experimentation on your own. Sometimes the easiest-to-implement experiments yield the most valuable results.

check.png Be wary of vendors who cite technology reasons for charging higher prices. Demand business impact reasons so you can properly balance costs with achievable business benefits.
