Appendix
Views on the World of Shoppers, Retailers, and Brands

A pioneer of in-store research, the late Bob Stevens of Procter & Gamble wrote a newsletter he called “Views from the Hills of Kentucky,” in which he offered his perspectives on shopping. Inspired by Bob, I’ve recently started my own online column, which I’ve called “Views” as a tribute to his earlier work. For this Appendix, I’ve selected two excerpts from his columns that remain highly relevant to understanding shopping, to give you a taste of his work. I encourage you to visit our archive of his wonderful columns at http://www.tns-sorensen.com/views/archive/views/. To see my latest “Views” entries on our ongoing studies of in-store retailing, please see http://www.insidethemindoftheshopper.com/.

Excerpts from “Views from the Hills of Kentucky” by Robert Stevens

Testers Versus Users

When asked to test something, do you

• Look at and use it differently than when you just happen to be using the same item?

• See things that you would not normally see in the course of using the same product?

• Look more closely at the physical characteristics of the product?

• Look more closely at the packaging?

• Feel that performance features take on different meanings?

If you answered “Yes” to most, if not all, of the preceding, you are a typical user and tester. Research has found that when you ask a person to test something for you, they place it under the microscope. They see things that, in the course of normal usage, they would never see or even consider.

If the preceding is true, how is it that almost all research is conducted in the test environment? It would seem to me that we would have some interest in the user environment, especially if there is a substantial difference in the assessment under the two perspectives. We don’t, after all, sell to the world’s testers but to users. It is they who dictate a brand’s success or failure.

Actually, I like using both the tester and the user environment when assessing a brand’s potential. I generally prefer to use testers in the upstream research and, as I get closer to market, I use the user perspective.

I have found that very few companies use the latter when assessing a brand’s potential. Why? I think that few companies realize that two perspectives exist. Among those who do, many skip the user perspective because few field services offer both options, and it is perceived to be difficult and expensive. I’ve never found that to be so, but it does take organization and skill to execute properly.

I wonder how many really good ideas are killed in the testing phase because they are being scrutinized so closely, whereas, if the problem appeared in the market, it would never be considered or even seen.

I’m reminded of an experiment in researching the effects of a test protocol in the late 1960s. We were about to conduct a central-location test (CLT) recall interview for a laundry detergent among 360 female heads-of-household.

We also had a hand dishwashing detergent study that had been canceled, which left us with 240 blind samples of a current market product. We divided the returning laundry detergent users into two panels, odd- and even-numbered.

After the laundry detergent interview was completed, we asked the even-numbered panelists (120 of them) if they would like to participate in another test. Those who said “Yes” were given a bottle of the dishwashing detergent and were told we would call them in two weeks to conduct the interview.

For the odd-numbered panelists, we told them we had some leftover dishwashing detergent and did not want to send it back to the plant. If they wanted a bottle, they could have one.
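The panel-assignment rule described above can be sketched as a simple function. The function name, panelist IDs, and condition labels here are my own illustration of the protocol, not part of the original study:

```python
# Hypothetical sketch of the odd/even panel-assignment protocol.
# Names and labels are illustrative, not from the original study.

def assign_panels(panelist_ids):
    """Split returning panelists into the two framing conditions.

    Even-numbered panelists are asked to *test* the dishwashing
    detergent (tester framing); odd-numbered panelists are simply
    *offered* a leftover bottle (user framing).
    """
    assignments = {}
    for pid in panelist_ids:
        if pid % 2 == 0:
            assignments[pid] = "tester"  # "Would you like to participate in another test?"
        else:
            assignments[pid] = "user"    # "We have leftover detergent -- want a bottle?"
    return assignments

# 240 returning laundry-detergent panelists, numbered 1..240,
# matching the 240 blind samples from the canceled study.
panels = assign_panels(range(1, 241))
testers = [p for p, cond in panels.items() if cond == "tester"]
users = [p for p, cond in panels.items() if cond == "user"]
print(len(testers), len(users))  # 120 panelists per framing
```

The point of the design is that the product is identical in both panels; only the framing of how the respondent received it differs.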

After two weeks, both panels were called and interviewed. The results showed dramatic differences in the responses between the odd- and even-numbered panels. Those who were asked to “test” the dishwashing detergent responded in much greater detail than those who were simply “given” a leftover sample.

Is there a right and wrong protocol? No. I believe there is a time and a place for both types of research. Both approaches bring valuable data to the table. It is important to know when to use each approach. I also expect that the difference between the two panels will be a function of the test product’s quality, where excellent and poor products will show bigger differences between panels, while average products will result in smaller differences.

Assessment in Context

Here, I’ll outline the results of four in-store packaging studies. The results of three of the studies indicated that the projects should move forward into the market, whereas conventional studies indicated that the projects should not move forward. The results of the fourth study indicated that the project should not move forward, whereas the conventional testing said the project should go forward. In all four cases, the management of the sponsoring companies followed the guidance from the in-store research.

Case Study #1: Package Outage Problem

A new process for making a product was about to be introduced into the market. Warehouse measurements of the product’s physical properties uncovered an unusual amount of outage (empty space above the product in the container). Consumer tests were quickly conducted, and the results concluded that in no way should this product go to market. An in-store consumer test of the product was then conducted, with the results indicating that the product was highly acceptable: Only one of the 700 consumers interviewed commented on the outage. The market introduction went forward as planned. The test market was a success.

Case Study #2: Package Design Research

A manufacturer wanted to improve the image of its liquid product. To do this, they were about to change both the bottle and the label; the product itself was not changing. Conventional mall-intercept consumer tests were conducted, and the results indicated that the changes should not be made. When, however, the new bottle/label product was placed on store shelves and the exact same interview from the mall study was conducted in-store, the results were dramatically different. The package changes were well received in-store, and management went forward with the changes. The introduction of the new bottle/label was considered a success.

Case Study #3: Capital-Intensive Product Form Change

A radical form change was being considered for a cleaning product, and a conventional simulated test market (STM) was conducted. The results were neither encouraging nor discouraging. There was, however, a major capital expense involved in moving forward with this initiative. With these results, management could not justify the expense involved with the change. An in-store test was conducted, with the results being dramatically favorable. The project went forward, with the product change setting a new standard for the category. Five years later, all major category participants had modified their brands to duplicate the change.

Case Study #4: Package Design Research

A major detergent manufacturer was about to make a major change in the bottle and label of a cleaning product. Conventional test methods encouraged the change. However, one skeptic in the company was holding out against the change, which was a dramatic departure from their current bottle and label. An in-store “shelf appearance test” was requested, using the very same interview used in the conventional testing. The results of the in-store study proved disastrous for the new bottle and label combination. Even before the results were tabulated, however, the initiative was canceled. The marketing research director was present at the testing and both heard the reactions of the shoppers and saw the shelf appearance of the brand. In the conventional testing, the brand was displayed with light on all sides of the bottle, giving it a “halo” appearance. On the store shelf, however, there was no backlighting. The result was a very poor appearance. As one respondent put it, “It looked like dirty motor oil.”

In consumer research, it pays to consider the possible physical and psychological biases involved in your test designs. My experience is that “Assessment in Context” leads to more successes and less financial risk.

Can you imagine trying to assess pricing structures of products sitting on a table in the back room of a mall? How typical is that of the natural environment? Maybe it is typical of research, but not of the consumer’s natural environment of product prices. How about testing the appearance of a container sitting on a table and not on a store shelf? It’s like testing a car’s driving comfort while sitting in it on the showroom floor.

For years, Sorensen Associates has been using real stores to test consumer products. Actually, 90+% of their studies are conducted in the retail environment. That’s why they are called the “In-Store Research Company.” While at Procter & Gamble, I was heavily involved in using the consumer’s home and the store environment as my laboratory. P&G was using homes before I ever came on board, and that was in 1951. I believe Dr. Smelser, the creator of the Market Research Department in 1923, used the consumer’s home as the base of all his research. In the 1970s, we started using real stores as focal points for assessing brand images, brand choices, package design, pricing, purchase motivation, brand rejection, and so on.

It’s called Assessment in Context. I think it is all about reliability and validity of the research.
