11

MARKET RESEARCH: FROM PREDICTING TO TRACKING

In 2007, ten thousand people around the globe were asked about portable devices—digital cameras, cell phones, MP3 players, and so on. It was part of a massive study conducted by the global media company Universal McCann. One of the hottest topics at the time was the first iPhone, which was announced in January but hadn’t yet been released.1 Once the researchers who conducted the study tallied the results, they reached an interesting conclusion: Convergent products like the iPhone are desired by consumers in countries such as Mexico or India, but not in affluent countries. “There is no real need for a convergent product in the U.S., Germany and Japan,” the study stated.2

A researcher who was involved in the study explained that users in affluent countries would not be motivated to replace their existing gadgets. “The simple truth: convergence is a compromise driven by financial limitations, not aspiration. In the markets where multiple devices are affordable, the vast majority would prefer that to one device fits all,” he told the Guardian.3

There’s a growing feeling among marketers that something is not working with market research. Marketers spend billions of dollars on research every year, but the results are mixed at best. Some of the problems are not new and relate to the basic challenge of using research to predict what consumers will want (especially with respect to products that are radically different). But the problem gets even more difficult for O-Dependent products. There are several issues, but at the most fundamental level, O-Dependent marketers face one additional key problem: Market research usually tries to measure P, but decisions are increasingly based on O.

Participants in market research studies typically indicate their preferences without first checking any other information sources. But as we have discussed, this is very different from the way people shop in reality today. In the Universal McCann study, for example, people were asked to say how much they agreed with the statement “I like the idea of having one portable device to fulfill all my needs.” Indeed, there was a significant difference between the percentage of people who completely agreed with this statement in Mexico (79 percent) and in the United States (31 percent). So in theory, people in the United States were much less excited about the idea of a phone that’s also a camera and a music player.

But it was a different story when people got closer to making a decision. They heard about the iPhone in the media (declaring it a revolutionary device).4 They saw reports on TV of people standing in line all night to get their hands on the first iPhone. And they started reading blogs and reviews from real users. As iPhones started rolling into the marketplace, the abstract idea of “having one portable device to fulfill all my needs” was replaced by actual reports from people who used it. Users started to experience—and share—the advantage of having 24/7 access to a camera, or not having to carry an iPod in addition to a cell phone.

It’s easy to blame the market research firm for this, but that is not our point. We are trying to explain the inherent difficulties in assessing consumers’ reactions in this new era. First, as we just discussed, more decisions today are impacted by O, whereas market research measures P. But let’s go beyond that: As we discussed, consumers have limited insight into their real preferences. This is especially true with respect to products that are radically different. Universal McCann correctly reported what it found. What market researchers often underestimate, though, is the degree to which consumers are myopic and have difficulty imagining or anticipating a new and very different reality. (Consumers tend to assume they’ll continue to like what they like now, and show no appetite for things that look very different.) What makes the task of a market research firm even trickier is that just as consumers’ expectations may be wrong (as was the case with the iPhone), there are many cases where industry expectations about what consumers will buy are wrong.

Even when market research techniques are administered in groups (for example, focus groups), they are neither designed nor equipped to predict the behavior of consumers under the influence of other people. For example, focus groups (their known limitations aside) don’t reflect other sources that consumers access in today’s reality, such as expert opinions, reviews, and other information services. A question that naturally arises is how predictive individual, disconnected market research can be when individuals’ future perceptions, preferences, and actions are greatly influenced by information that will be acquired from O.

Consider, for example, conjoint analysis, which is often used to estimate how consumers value different product features.5 Think of a guy named Jim who agreed to participate in such a study. He is presented with several product combinations and is asked to make some choices: Do you prefer a Samsung laptop with 2 GB of RAM, 80 GB hard drive, and 15.6-inch screen? Or would you rather have an HP with 4 GB of RAM, 60 GB hard drive, and 11.6-inch screen? After many similar questions that require Jim to make such choices, the market research firm uses sophisticated statistical techniques to derive the relative importance of different attributes.
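
To make the mechanics concrete, here is a minimal sketch of how part-worths might be estimated from data like Jim’s. Real conjoint studies typically use choice-based designs analyzed with multinomial logit or hierarchical Bayes models; the rating-based version below, and all profiles and ratings in it, are simplified and invented for illustration.

```python
# Minimal sketch: estimating part-worths from rating-based conjoint
# data with ordinary least squares. Real studies typically use
# choice-based conjoint with multinomial logit or hierarchical Bayes;
# all profiles and ratings below are invented for illustration.
import numpy as np

# Each profile: (RAM in GB, hard drive in GB, screen in inches),
# plus the rating a respondent like Jim gave it.
profiles = np.array([
    [2, 80, 15.6],
    [4, 60, 11.6],
    [4, 80, 13.3],
    [2, 60, 15.6],
    [8, 80, 11.6],
    [8, 60, 13.3],
])
ratings = np.array([5.0, 6.0, 7.5, 4.0, 8.0, 7.0])  # hypothetical

def dummy_code(col):
    """Dummy-code one attribute column against its lowest level."""
    levels = sorted(set(col))
    coded = np.column_stack([(col == lv).astype(float) for lv in levels[1:]])
    return coded, levels

columns, names = [np.ones((len(profiles), 1))], ["baseline"]
for j, attr in enumerate(["RAM", "HDD", "screen"]):
    coded, levels = dummy_code(profiles[:, j])
    columns.append(coded)
    names += [f"{attr}={lv:g}" for lv in levels[1:]]

X = np.hstack(columns)
# Least-squares coefficients are the estimated part-worths relative
# to each attribute's baseline (lowest) level.
part_worths, *_ = np.linalg.lstsq(X, ratings, rcond=None)
for name, w in zip(names, part_worths):
    print(f"{name:>12}: {w:+.2f}")
```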

This is all very nice. But what happens in reality when Jim is ready to buy his next laptop? He goes on CNET, Amazon, Decide.com, BestBuy.com, gdgt.com, or similar sites to read what others have to say. He’s naturally attracted to the laptops with the highest ratings and scores (which are usually the first thing you see on these sites). When he starts reading reviews, he may be sidetracked by a new feature or consideration. A friend on Facebook posts something about her new laptop that takes Jim in yet a different direction. In short . . . O kicks in and takes over.

The problem is that conjoint and other preference measurement techniques ask people to make choices or rate options based on their current beliefs, without engaging in the kinds of information acquisition they would undertake in reality if they were actually buying the product. Not to mention that O-sourced information is often much more dynamic and constantly being updated, so even if a researcher were trying to somehow account for the present effect of O, that accounting may become largely irrelevant and out of date by the time actual purchase decisions are made. Also, decisions made under the influence of O are much “noisier” and less predictable than hypothetical decisions made strictly by an individual consumer on her own when completing a questionnaire. While a limited set of studied features might be reasonably representative of the factors an individual consumer will consider, a larger set of reviewers and information sources introduces various unpredictable factors (for example, “coolness,” popularity, highlighting of seemingly insignificant features) that are difficult to capture in conjoint measurement.

The noise and hard-to-anticipate information sources that limit the ability to predict purchase decisions are not unique to conjoint analysis; they similarly limit the usefulness of other common research techniques such as brand equity measures and pricing studies. While predicting individual decisions that are made in isolation is not a simple task, predicting the joint evaluations of many consumers and the influences of other information sources is likely to be an order of magnitude more challenging.

A MAJOR SHIFT

It’s a cold evening in Cambridge, Massachusetts. People are leaving the Kendall Square Cinema after the 5:30 P.M. showing of Lincoln. As they walk out to the parking lot, snippets of their conversations are heard, and immediately fade away into the freezing air: “The acting was brilliant, but I was glad it was over.” “You’re kidding?” “Day-Lewis was amazing, but . . .”

A few feet away, in a redbrick building adjacent to the cinema, there’s an office of a local start-up that works with conversations as its raw material—not the ephemeral kind from the walkway next door, but the online kind that stays out there for a long time and can be mined and analyzed. The start-up, Bluefin Labs, has forty of the top U.S. TV networks among its clients, including CBS, NBC, and Fox. Until a few years ago, these networks were limited to techniques such as standard surveys and focus groups, and to data regarding the reach of shows. Now they can also learn what resonates with people in real time based on what’s being said on Twitter and other social media. In addition to TV networks, Bluefin is used by advertisers to see which ads resonate with consumers and to analyze their reaction. Advertisers will probably continue to test their commercials before airing them, but once a commercial is on the air, Bluefin lets them detect how it fares in real life, which can be quite different. A few days into the 2012 Olympic Games, for example, it became apparent that a commercial for one of Bluefin’s clients was generating significant adverse commentary. When tested in isolation before it ever aired, the commercial did fine, but when it was shown in the context of the Olympics, it generated negative sentiment that was starting to gain momentum on social media. The client was able to quickly replace the problematic ad.6

The redbrick walls at Bluefin are reminiscent of the building’s industrial past (it used to be a hose factory). Now, instead of workers sweating over heavy machinery soaked in the smell of rubber, the large halls are occupied by industrious young techies searching for insight in big data. Deb Roy, the Massachusetts Institute of Technology professor who cofounded the company, is known for a study he conducted about language development. He and his wife installed video cameras throughout their house, and for three years recorded everything that went on in the house from the moment their son was born. Having such rich data allowed Roy to uncover surprising insights about why certain words are learned before others. For example, the likelihood that his son would say a new word correlated strongly with how unique the word was in space. So the word “bye,” which is closely associated with the entrance to the house, was more likely to be learned early than a word that is said in multiple locations around the house.

In the M*A*S*H conference room (meeting rooms are named after TV shows) a large screen displays what clients at TV networks see in real time—a listing of all shows on the air (even those of competitors). Clicking on a program shows a minute-by-minute level of social media conversation and its sentiment. A client can see which programs get the highest engagement and, within each show, what causes spikes in conversations. In other words, they can take an ongoing, comprehensive, and exceptionally detailed look at O.7 The software is fed by two sources of data. First, there are the millions of comments that are made by viewers about TV shows.8 Second, there’s a video stream consisting of everything on U.S. television. Their software links what’s said publicly on social media to specific moments or events within TV programs (an event can be a play in a game, or a scene within a show, or an ad). Digging further, the user can see what other shows, brands, or topics are of interest to those who engage with a particular TV program.9
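
To illustrate the kind of alignment such a dashboard implies (this is not Bluefin’s actual pipeline), here is a minimal sketch that buckets timestamped comments by broadcast minute and averages a per-comment sentiment score. All comments, timestamps, and scores are invented.

```python
# Minimal sketch of minute-by-minute social media aggregation for a
# TV broadcast: bucket timestamped comments by broadcast minute and
# average a sentiment score per minute. Not Bluefin's actual system;
# the comments, timestamps, and scoring rule are invented.
from collections import defaultdict

comments = [  # (seconds into the show, sentiment score in [-1, 1])
    (35, 0.8), (70, 0.6), (95, -0.4), (110, -0.7), (125, -0.9), (410, 0.3),
]

per_minute = defaultdict(list)
for t, score in comments:
    per_minute[t // 60].append(score)

# Spikes in volume or swings in average sentiment point to the
# specific moments (a play, a scene, an ad) driving the conversation.
for minute in sorted(per_minute):
    scores = per_minute[minute]
    avg = sum(scores) / len(scores)
    print(f"minute {minute:2d}: {len(scores)} comments, avg sentiment {avg:+.2f}")
```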

Bluefin is one of many companies in the social analytics space that try to gain insight by keeping their hands on the pulse of O. Companies such as Salesforce.com, Visible Technologies, Synthesio, and Attensity offer more general listening platforms that go beyond the TV industry and allow marketers in a variety of domains to make sense of what’s being said on social media. This area is still maturing and obviously doesn’t offer any magic solutions, yet the general direction makes sense. While using traditional market research to derive long-term forecasts of consumer demand has become more challenging, the current environment does provide marketers with more sophisticated and precise tools to track and respond to consumers’ decisions as they occur. It is reasonable to expect that future market research will focus more on within-context predictions and short-term marketer responses and less on long-term preference forecasting.

What happened with the iPhone study is likely to repeat itself. It is hard to predict the success of a product ahead of time by measuring individual consumers’ preferences and then trying to use these preferences to predict consumers’ future decisions. Increasingly, the name of the game will be: watch competitors’ initiatives, assess consumer reaction to those initiatives, and react as fast as possible. In the case of the iPhone, the major players varied pretty radically in how well they read consumers’ reaction and, consequently, how fast they reacted. Google, Samsung, HTC, Microsoft, Nokia, and RIM each reacted at a different pace. Samsung, for example, was pretty quick to respond, while Nokia’s CEO admitted as late as 2011 that his company had missed big trends and still did not have an answer. “The first iPhone shipped in 2007, and we still don’t have a product that is close to their experience,” he said.10 Nokia should have paid attention to O and acted accordingly. In the case of ASUS’s Eee PC, the major competitors seem to have reacted rather swiftly. As you recall, Jonney Shih and his team surprised the PC industry with an inexpensive device. Conventional market research was unlikely to predict its popularity, especially since it was adopted by segments ASUS did not target. Acer, for example, even though it initially downplayed the potential of the cheap device, was quick to develop its own netbook. HP, Dell, and Lenovo followed quickly, and by fall 2008 all major manufacturers had a netbook to offer.11

We’re likely to see more of that. Trying to predict where things are going has become more challenging. While traditional consumer research can still tell a marketer whether its next toothpaste will do better with purple or black stripes, it is not of great help for more radical, unfamiliar changes. There is no effective way to use market research to predict consumer reaction to major changes or new concepts. When assessing new concepts, consumers tend to be locked into what they are used to and believe today, which makes them less receptive to very different concepts and more receptive to small improvements over the current state. Similarly, experts who try to predict the success or failure of radically new products are unlikely to be much more accurate than consumers. (Among other things, experts have famously made bad predictions regarding the success of the telephone, the Internet, and television.) What marketers are often left with is trying to quickly figure out where things are going and what consumers and competitors appear to follow, and then trying to offer a better solution. Instead of predicting vague consumer preferences (which may change anyway when it’s time to buy), these days one of the few things a marketer can do is follow O and play along to make the best of a situation they no longer control.

But as we noted earlier, the current environment does not mean the end of market research, just a shift in focus with some silver linings. The current environment and technology make it much easier for marketing researchers to run experiments, adjust, and run the next experiment. Even when absolute values are easier to identify, the manner in which options are displayed and described can make some difference. We are not talking about long-term decisions such as which products to sell, but many small improvements (which can add up). For example, a site such as CarsDirect.com may run an experiment to test the effect, if any, of the cars they highlight on their website, the other cars shown, and the ease of accessing related blogs and reviews. The company could try different display formats by randomly assigning some consumers to different page versions. If differences emerge, the company may replicate the experiment on another day or at a different location, possibly making further adjustments. Once the company determines that the differences in consumer response are stable and robust, the optimal design can be implemented more broadly. This is likely to be an ongoing process whereby the company continues to try different things using trial and error, making adjustments, and then running the next experiment. The cost of such experiments is rather small, and the ability to apply lessons quickly can have an impact on profitability.
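
As a rough sketch of the experiment just described, a marketer could randomly but consistently assign visitors to page versions and then use a simple two-proportion z-test to check whether any difference in response is larger than chance. The site behavior, visitor counts, and conversion numbers below are hypothetical.

```python
# Minimal sketch of a page-version experiment and its analysis: stable
# random assignment of visitors to versions, then a two-proportion
# z-test on conversion rates. All counts below are hypothetical.
import hashlib
from math import sqrt
from statistics import NormalDist

def assign_version(visitor_id, versions=("A", "B")):
    # Hash the visitor ID so the same visitor always sees the same page.
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return versions[int(digest, 16) % len(versions)]

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

print(assign_version("visitor-42"))  # "A" or "B", stable per visitor

# Hypothetical week: version B highlights a different set of cars.
z, p = two_proportion_z(conv_a=180, n_a=5000, conv_b=230, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
# As the text suggests, replicate on another day or location before
# rolling a winning design out broadly.
```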

MEASURING SATISFACTION

Another evolving area in consumer research is the measurement of customer satisfaction. Conventional wisdom holds that once the consumer has had a chance to experience the product or service, a marketer may follow up with a survey to see how satisfied she was. But think about what we showed in Chapter 6: As better information sources lead to more accurate expectations, the gap between expectations and actual experiences should generally be smaller. In other words, expectations are becoming more predictive of experience and post-sale satisfaction. This could suggest that measuring expectations prior to the experience can actually be more effective, more timely, and more actionable than measuring satisfaction afterward. However, for the same reasons that measuring preferences has become more challenging (due to growing O influence), measuring current (often vague) expectations may not produce accurate predictions of actual satisfaction. More important, using market research to measure both expectations and satisfaction has limited value in a world where up-to-the-minute satisfaction ratings and evaluations from actual users who share their views are so plentiful and easily accessible to marketers.

So marketers can cut their market research budgets and, rather than waste their time measuring individual consumers’ preferences, expectations, satisfaction, and loyalty, rely on readily available public information. For example, a marketer of high-price, sophisticated cameras can visit websites frequented by the relevant prospective buyers and see what they like, want, and dislike. And instead of asking owners of bread makers about their evaluations and recommendations (after they have gained experience), one can simply sample and quantify the evaluations available on key websites where bread makers are sold and reviewed. In other words, measure reviews and other content created by O, because it’s ultimately what impacts the expectations and experiences of those considering a product or a service. Another advantage of this approach is its timeliness: Reviews and tools such as Twitter can give an up-to-the-minute picture of consumer opinion, whereas survey results can lag behind and quickly become obsolete. For example, a mobile phone that looked perfect when a survey was done may look inferior shortly after some new options are introduced. Measuring ultimate customer satisfaction will then often become a lower priority, even redundant.

Bazaarvoice is an interesting company in this context. We earlier discussed the role a company of its kind plays in collecting, moderating, and syndicating reviews. But it also helps marketers gain insight from this content. On any given day, hundreds of Bazaarvoice employees read online reviews and tag the content. For example, if a customer reviews a Samsung TV and comments that the remote control requires a certain feature, it will be tagged with a product suggestion code. When you consider the fact that this is done with thousands of reviews in twenty-seven different languages, you start to appreciate the wealth of structured data that becomes available to a marketer. At the most basic level, a manager at Samsung can focus easily on all reviews of a certain model that are tagged with a product suggestion code to detect things that can be improved.12 Once marketers start to mine the data and look for patterns, they can find interesting trends regarding desired features, additional accessories that might be bundled with a product, or other unexpected things.
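
Bazaarvoice relies on human coders working across many languages, but a crude automated version of the same tagging idea can be sketched in a few lines. The cue phrases, tag name, and sample reviews below are invented for illustration.

```python
# Minimal sketch of rule-based review tagging: assign a "product
# suggestion" code when a review contains phrases that signal a
# feature request. Bazaarvoice uses human coders in many languages;
# these cue phrases, the tag name, and the reviews are invented.
SUGGESTION_CUES = ("should have", "wish it had", "needs a",
                   "would be better with")

def tag_review(text):
    tags = []
    lowered = text.lower()
    if any(cue in lowered for cue in SUGGESTION_CUES):
        tags.append("PRODUCT_SUGGESTION")
    return tags

reviews = [
    "Great picture, but the remote should have a backlight.",
    "Works perfectly out of the box.",
]
for r in reviews:
    print(tag_review(r), "-", r)
```

Once reviews carry structured tags like this, a product manager can filter all reviews of a given model down to just the tagged suggestions, which is the basic workflow described above.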

At a very practical level, manufacturing defects and other problems can be spotted pretty quickly. For example, not long ago Kohl’s spotted a sharp shift from positive to negative reviews for one of its products. Further investigation detected a problem with a particular production batch.13 In the same way, a couple of years ago 3M detected a sudden outcry about the Scotch Brite Soap Dispensing Dishwand. (“What has happened to your dishwand??” a typical review read. “The little blue cap on the wand will not stay on and all the soap leaks out.”) 3M found an error in the production specs, pulled the product from stores, and fixed the problem. Another example: One of Samsung’s refrigerators must be plugged in for six hours before the ice machine works. By monitoring the product reviews, Samsung noticed that many customers thought the machine was broken, which led to a high return rate. The product manager distributed to stores a short video explaining the ice machine feature. Return rates decreased. The social analytics company Synthesio helped the global hotel chain Accor build a listening tool that helps the company track its online reputation. Among other benefits, it helped the chain identify (and fix) a problem with guest keys that were demagnetized by smartphones.14 These types of problems could eventually have been identified in the past by analyzing complaints to call centers or through satisfaction surveys. Today they can be brought to management’s attention faster.
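
The monitoring logic behind stories like Kohl’s and 3M’s can be sketched simply: compare the average rating of the most recent reviews against the longer-run baseline and raise an alert when the drop exceeds a threshold. The ratings stream, window size, and threshold below are invented.

```python
# Minimal sketch of rating-shift detection: flag a product when the
# average of its most recent reviews falls well below its long-run
# average. The ratings stream and thresholds are invented.
from statistics import mean

def flag_rating_shift(ratings, window=20, drop_threshold=1.0):
    """ratings: star ratings in chronological order."""
    if len(ratings) < 2 * window:
        return False  # not enough history to compare
    baseline = mean(ratings[:-window])  # long-run average
    recent = mean(ratings[-window:])    # last `window` reviews
    return (baseline - recent) >= drop_threshold

# Hypothetical stream: a product whose reviews suddenly turn negative.
history = [5, 4, 5, 4, 5, 5, 4, 5, 4, 5] * 4 + [2, 1, 2, 1, 1] * 4
if flag_rating_shift(history):
    print("Alert: recent reviews dropped sharply; check the latest batch.")
```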

We’re not talking only about detecting malfunctions. Reviews, user groups, and other forums can quickly highlight user perception that a product does not perform as expected or that its features are inferior. Conversely, reviews can help a company identify rising stars in its product line. At L.L.Bean, for example, a weekly report that goes to management with sales results and back order status also summarizes last week’s reviews by product category, the trend line for each category, and the percentage of products that got four or five stars. A separate report highlights “winners and losers”—specific items that are doing especially well and those that were poorly reviewed. All negative reviews (one or two stars) are distributed on a daily basis to the product managers who are expected to respond by thanking the customer for the feedback, apologizing (when appropriate), offering an alternative, and reinforcing the L.L.Bean guarantee. If an item gets more than six bad reviews, this starts a discussion within the company: Is the product description inaccurate or does the product have a real problem? If it turns out that the problem is consistent and the product has no redeeming value, the inventory is liquidated, donated to charity, or (in extreme cases) destroyed.15
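
A simplified sketch of such a “winners and losers” report follows. The four-or-five-star share and the more-than-six-bad-reviews trigger come from the description above; the items and review data are invented, and L.L.Bean’s actual reporting system surely differs.

```python
# Minimal sketch of a weekly "winners and losers" review report:
# per-item share of four- and five-star reviews, plus a flag when an
# item crosses the bad-review threshold described in the text. The
# items and ratings are invented.
from collections import defaultdict

reviews = [  # (item, stars) for one hypothetical week
    ("fleece jacket", 5), ("fleece jacket", 4), ("fleece jacket", 5),
    ("duck boots", 2), ("duck boots", 1), ("duck boots", 1),
    ("duck boots", 2), ("duck boots", 1), ("duck boots", 2),
    ("duck boots", 1), ("canvas tote", 5),
]

by_item = defaultdict(list)
for item, stars in reviews:
    by_item[item].append(stars)

for item, stars_list in sorted(by_item.items()):
    high_share = sum(s >= 4 for s in stars_list) / len(stars_list)
    bad_count = sum(s <= 2 for s in stars_list)
    label = ("LOSER: review with product team" if bad_count > 6
             else "WINNER" if high_share >= 0.8 else "")
    print(f"{item:15s} {high_share:5.0%} at 4-5 stars, "
          f"{bad_count} bad reviews  {label}")
```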

MARKET RESEARCH TO DETERMINE LOCATION ON THE CONTINUUM

Predicting the location of your customers on the influence continuum requires marketers to assess two fundamental factors: diagnosticity and accessibility.16

Diagnosticity is the more important driver (and it also affects accessibility). It refers to the degree to which O is informative (or diagnostic) about your personal product experience. Consider two categories, for example: cameras and investment management. Cameras are fixed items (the product you’re reading about in a review is the same product you’ll use) and chances are that there won’t be great differences between the average of the reviewers’ experiences and yours. In contrast, you may read a review of an investment firm that is based on a reviewer’s experience with an excellent financial adviser. Yet the adviser who’s assigned to you by the same company is not as good, so in this case O is not diagnostic of your personal experience. When there is great variability in a service, O is not likely to be a good predictor.

Market research to determine the diagnosticity of O in a certain category calls for finding out from consumers how useful and informative O is or can be (even if it’s not currently available). One way to find out is to ask consumers through surveys and interviews. The other is to conduct experiments in which one group chooses a product or a service based on currently available information sources, while another group also has access to extensive (but realistic) O sources; the comparison can allow a marketer to determine the potential net impact of O. Considering that O encompasses a variety of different sources, such an experiment can be conducted separately for specific O sources.

We generally believe that where there is a need (that is, where O is capable of providing useful information), it will become available over time. So, if you determined that O can be useful in a category, you can expect it to become more widely available over time, even if it’s currently not available.

Assessing the current accessibility of O can be achieved by observing what’s available out there and by analyzing consumer information search and purchase behavior—determining where people buy, how they buy, what information sources they consider, the sheer number of available reviews and expert evaluations, and so on. Are consumers making decisions on their own or are they reading reviews first? Do they consult with other users on social networking sites? How do they react to information they get from other consumers? Look at both the percentage of potential customers who consider information from others and, for those who do, the impact of that information on their decisions.
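
The two measures just suggested reduce to simple arithmetic once survey data is in hand. A minimal sketch, with invented records:

```python
# Minimal sketch of the two accessibility metrics described above:
# the share of buyers who consulted O at all, and, among those, how
# often that information affected their decision. Records are invented.
survey = [  # (consulted_O, decision_affected_by_O)
    (True, True), (True, False), (False, False),
    (True, True), (False, False), (True, True),
]

consulted = [record for record in survey if record[0]]
pct_consult = len(consulted) / len(survey)
pct_impact = sum(affected for _, affected in consulted) / len(consulted)
print(f"{pct_consult:.0%} consulted O; of those, "
      f"{pct_impact:.0%} were swayed by it")
```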

Keep in mind that the availability of reviews, while helpful, is not enough to indicate reliance on O. Nowadays you can find some user-generated content and online reviews for almost any product. We even found some reviews of paper clips on Amazon.com (“It’s a paperclip, yay, it works as described”). Yet the existence of these reviews doesn’t mean that O is important in the purchase decision. There are also categories that people are more likely to talk about than to seek information about. Consider fashion accessories. A woman is very likely to show a new scarf or hat to her friends, but not necessarily to seek information prior to purchasing such an accessory.17

Let’s look at a quick hypothetical example of how one would go about conducting research to locate a service category on the O-influence continuum. Alison is an analyst who’s been asked to assess customers’ location on the continuum for a car insurance company. Her first step is to take inventory of what’s available out there in terms of reviews and other quality-oriented user-generated content. She starts searching and cannot easily find many meaningful reviews. She does find some general articles on how to go about buying car insurance, but when it comes to actual quality assessment of specific companies or agents, there isn’t much out there. The next step for Alison is to determine the existing sources of information that people currently use. Through a survey and by observing consumers, she determines that at the present time, potential purchasers go to the providers’ websites, compare rates, and call agents; some talk to their friends. Her conclusion is that, at this time, the process is not very O-Dependent.

Alison’s next step is to find out how useful O information could be if it were available. She conducts an experiment with four groups. One group has access to a couple of review sites currently available. For the second group, she creates a fictitious database with much more detailed and specific customer reviews. These reviews rate companies on their service before and after an accident and go into detail on specific needs such as teenage drivers. Alison may also divide that group into two subgroups based on the content (more or less favorable) of the reviews. A third group is provided with detailed information and service specs from the insurance company. And a fourth group is provided with all three information sources and can review any or all of them. Alison may also test how the information reviewed by each group affects preferences for the described company relative to other insurance companies, as well as how much of the provided information participants recall. Alison concludes that customers certainly respond well to more granular information from other customers, and this group is most likely to adopt preferences corresponding to the provided information and to remember more of what they reviewed. She remarks that it will take a while before tools that provide such data become available.
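
If Alison’s preference measure is a numeric score, a standard way to compare her four groups is a one-way ANOVA (here via SciPy). The scores and group labels below are invented; a real analysis would also examine the favorable-versus-unfavorable subgroups and the recall measures she included.

```python
# Minimal sketch of analyzing a four-group experiment like Alison's:
# compare mean preference scores for the focal insurer across
# conditions with a one-way ANOVA. All scores are invented.
from scipy import stats

groups = {
    "existing reviews":       [4.1, 3.8, 4.5, 3.9, 4.2],
    "rich mock reviews":      [6.8, 7.2, 6.5, 7.0, 6.9],
    "company-provided specs": [4.9, 5.1, 4.7, 5.3, 5.0],
    "all sources":            [6.5, 6.9, 6.2, 7.1, 6.6],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")
# A small p-value, with the "rich mock reviews" group scoring highest,
# would support the conclusion that more granular O moves preferences.
```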

Such research may need to be done separately for different products, in particular if there is reason to believe that consumer decision making and information value differ across products. Also, each of your customers may use a slightly different combination of sources, so when we talk about your customers’ location on the continuum, we’re talking about an average of prospective purchasers. In some cases, though, you may identify distinct groups of customers that are located at different places on the continuum. For example, you may find one segment of your customers that relies heavily on review sites before purchase, while another segment uses your website as the main source of information. We will deal with this type of segmentation in the next chapter.

Questions, questions, questions. Some marketers will continue to chase the dream of figuring out the true preferences of consumers and then giving them exactly what they want. They will continue to track slight changes in brand perceptions, segment migrations, and so on. There are a couple of problems with this approach. First, consumer preferences and perceptions tend to be vague. So the idea that digging deeper by asking more and more questions will uncover consumers’ true preferences usually leads to “findings” that are not particularly meaningful or reliable. For example, some companies practice the “laddering technique,” which promises to get at people’s core values and preferences using a sequence of pre-specified questions. This approach essentially assumes that the true values are hidden deep inside, and that if we only patiently ask the right questions, we’ll get to the bottom of things. We don’t think so. There is now a vast amount of evidence showing that such techniques do not uncover any truth, but largely create answers to questions that can later be relied upon by marketers.18 And considering that the results of market research-based strategies tend to be ambiguous (because so many other factors affect actual sales), managers can almost always attribute success to their smart techniques or strategies and attribute failures to other causes.

On top of all that, relying on such “deep” research techniques is complicated by the influence of O, which makes predicting even more challenging. So it is reasonable to conclude that the use of market research techniques that rely on measures of individual consumers’ preferences to predict future marketplace decisions will decline (or be reserved for situations where it has clear value, such as finding out consumers’ reaction to yellow toothpaste). Increasingly, marketing will be about understanding what information sources consumers use, following trends, trying to offer the right products, and then following consumers’ reactions. We said it before: Marketers in O-Dependent domains should stop thinking of themselves as drivers, and embrace their role as followers.

A funny thing has happened to market research. On the one hand, researchers can use increasingly sophisticated tools. Privacy aside, they can track consumers’ every move and word on the Web and social media. There have also been developments in statistical and research techniques that a researcher might use to measure a consumer’s preferences (at the time that the measurement takes place). One might think that such timely, detailed information would allow marketers to design just the right offers that consumers have been looking for, even before they realize what they want. However, the changes in the information sources consumers use (and as a result, in the way they make decisions) make such predictions less useful than marketers and the public might think. In fact, as we pointed out, predicting what individual consumers would end up doing is becoming harder than ever. The difficulty derives from the fact that when it’s time to buy, the information that will influence the actual decision depends on what the consumer will happen to consider at that time. Stable dispositions are not as predictive as they used to be. Yet there are things that researchers will be able to do even in categories where time-of-purchase preferences are unpredictable. Recognizing the limits of such research, marketers should track, code, and quantify the content of reviews and other relevant evaluations created by O. We expect that future market research will focus more on tracking and responding to consumers’ decisions as they occur, and less on long-term preference forecasting. Instead of measuring individual consumers’ preferences, expectations, satisfaction, and loyalty, marketers should systematically track the readily available public information on review sites, user forums, and other social media.
