Chapter 9

Measuring and Optimizing Marketing Spend

“Half the money I spend on advertising is wasted. The trouble is, I don’t know which half.” This quote, attributed to John Wanamaker, a department-store magnate in the nineteenth century, highlights a constant challenge for marketing executives. Two new developments—digital advertising, with its ability to track every click, and big data for analyzing and finding patterns—were hailed as potential solutions to this decades-old problem. However, these developments have come with their own challenges. Digital advertising has ushered in an era of new marketing metrics, such as video views, Facebook likes, and click-through rates, even though their link to actual sales and profitability often remains fuzzy. And although big data has allowed executives to easily find patterns and correlations, many of those patterns and correlations are spurious and misleading. In this chapter, we will discuss some of the key challenges in measuring and optimizing one’s marketing spend, and the latest research that aims to solve those challenges.

Correlation versus Causality

In 2008, Chris Anderson, the editor of Wired magazine, wrote a provocative article titled, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete,” in which he wrote:

Scientists are trained to recognize that correlation is not causation . . . But faced with massive data, this approach to science—hypothesize, model, test—is becoming obsolete . . . There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models . . . Correlation supersedes causation . . . There’s no reason to cling to our old ways. It’s time to ask: What can science learn from Google?1

The same year, Google researchers published an article in the journal Nature about Google Flu Trends—a model that mined hundreds of billions of US Google searches to accurately predict the incidence of the flu.2 Suddenly it seemed that in the era of big data, correlation might really be enough. Why bother understanding how advertising may influence consumers if we can find a strong positive correlation between advertising and sales? With vast amounts of information available in the digital era, we can let the data “speak” for itself.

The problem with this approach is that we often find spurious and misleading patterns in large datasets. To see this, try correlating two random variables in Google Correlate, a free service provided by Google. There is, for example, an incredibly high correlation between US web searches for losing weight and for townhouses to rent, even though it is hard to believe that the two searches are somehow related.
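To see how easily chance masquerades as insight, consider a minimal simulation in Python. Every series below is pure random noise, so any correlation we find is spurious by construction; the number of series and weeks are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(seed=42)
n_series, n_weeks = 1000, 52

# 1,000 independent random walks; none has any causal link to any other
series = rng.normal(size=(n_series, n_weeks)).cumsum(axis=1)

corr = np.corrcoef(series)      # 1,000 x 1,000 matrix of pairwise correlations
np.fill_diagonal(corr, 0)       # ignore each series' correlation with itself

print(f"Strongest 'pattern' found: r = {np.abs(corr).max():.2f}")
# With this many series, pairs with |r| > 0.9 appear routinely, by chance alone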

While it is easy to believe that the correlation in this example is spurious, in many other instances such misleading results may seem convincing and may prompt wrong decisions. Even Google’s study came under criticism in a 2014 article in the journal Science, which found that since August 2011, Google Flu Trends had overestimated the flu rate in 100 of the 108 weeks of the study, sometimes by more than 100 percent.3 Assuming without question that correlation bespeaks causality has also led to many incorrect conclusions in measuring marketing effectiveness, as we highlight next.

What Is a Facebook “Like” Worth?

A few years ago, my Harvard Business School colleague John Deighton and I invited a senior digital marketing executive from Coca-Cola to be a guest speaker for a digital marketing course that we were teaching to our MBA students. In his remarks to the class, the Coke executive proudly said that Coke had 40 million Facebook fans (today this number is over 105 million)—a key metric that Facebook was promoting. Soon the class began debating the value of a Facebook “like.” Some students argued that the mere fact that 40 million consumers raised their hands to publicly declare their affinity for Coke ought to be highly valuable. Others wondered if Coke “bought” these fans by offering them discounts and free gifts.

Around that time, many research companies were trying to quantify the value of a Facebook fan. A 2011 study by comScore proclaimed that Starbucks fans, and friends of fans, spent 8 percent more and transacted 11 percent more frequently than the average internet user who transacted at Starbucks.4 A couple of years later, Syncapse, a company that specializes in “social intelligence,” created an even bigger splash by declaring that, on average, the value of a Facebook fan was roughly $174. For Coke specifically, a fan was worth a little over $70 (see figure 9-1).

FIGURE 9-1

Value of a Facebook fan


Source: Todd Wasserman, “A Facebook Fan Is Worth $174, Researcher Says,” Mashable, April 17, 2013.

These provocative studies piqued my curiosity, and I wanted to understand how the researchers arrived at these incredible numbers. In addition to brand affinity and media value, a key component of fans’ value in these studies came from their increased product spending. To measure it, researchers used a panel of consumers and compared the annual spending on a variety of brands by Facebook fans and by nonfans of those brands. Using this approach, Syncapse found, for instance, that Coke fans spent $70 more per year on Coke products than nonfans did, which led Syncapse to conclude that the value of a Coke Facebook fan was $70.

But this approach raises a fundamental question: Did “liking” Coke on Facebook encourage users to spend more on Coke, or were loyal and heavy users of Coke more inclined to “like” Coke on Facebook in the first place? This distinction is critical, since these studies were effectively suggesting that Facebook “likes” build loyalty and encourage consumers to spend more on their brands. However, if self-selection was at work, and loyal, heavy users were more likely to become Facebook fans, then using Facebook “likes” as a key metric of success—or, worse, spending marketing dollars to obtain them—would be highly unjustified.

It is hard to control for self-selection when parsing Facebook data, so my colleagues and I undertook a research project in which, across a series of lab and field studies, we randomly assigned consumers to fan and nonfan groups. In one of our experiments we invited consumers in the treatment, or fan, group to like a new cosmetics brand on Facebook (most accepted the invitation), while the people in the control, or nonfan, group did not receive this invitation. All participants were then given a coupon for a free sample, and we tracked coupon redemption for both groups. In a second set of experiments we tested whether liking a page influences the behavior of online friends. Across five experiments and two meta-analyses involving over 14,000 consumers, we found that a Facebook “like” has no impact on the attitudes and buying habits of either consumers or their online friends. In other words, the mere act of “liking” a brand on Facebook had no value in our study.5
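The core comparison in such an experiment is simple: because assignment to the fan and nonfan groups is random, any difference in coupon redemption can be attributed to the “like” itself. Here is a minimal sketch of that comparison in Python, using hypothetical counts rather than the study’s actual data:

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts; the study's real data and analyses were more involved
redemptions = [132, 127]     # coupons redeemed in [fan group, nonfan group]
group_sizes = [1000, 1000]   # consumers randomly assigned to each group

z_stat, p_value = proportions_ztest(redemptions, group_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# With random assignment, a difference this small and insignificant is
# evidence that the "like" itself did not change behavior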

In recent years, Facebook has moved away from touting the value of fans and has focused instead on demonstrating the actual lift in sales from advertising in its newsfeed. Yet the number of “likes” a brand attracts continues to be a focal metric for many marketing executives.

Social Contagion

Social networks such as Facebook give friends the power to influence one another. In a provocative study, Nicholas Christakis, of Harvard Medical School, and his colleague James Fowler claimed that obesity spreads through social circles like an epidemic.6 The Washington Post reported the findings of this study as follows:

The study, involving more than 12,000 people tracked over 32 years, found that social networks play a surprisingly powerful role in determining an individual’s chances of gaining weight . . . when one spouse became obese, the other was 37 percent more likely to do so in the next two to four years, compared with other couples. If a man became obese, his brother’s risk rose by 40 percent.7

Soon this study came under heavy criticism from the scientific community. Using survey data on high school teens and the same approach Christakis and Fowler had used, one study showed that height, acne, and headaches were also “contagious”—a highly implausible result, according to that study’s authors.8 Russell Lyons, a mathematician at Indiana University, published a highly critical paper challenging the obesity findings for what he called “deeply flawed” methods of analysis.9

A major critique of studies that attempt to measure the impact of social influence is the confounding effect of what is called “homophily”—the phenomenon that “birds of a feather flock together.”10 Effectively, homophily states that two people, say persons A and B, are likely to be friends if they have similar interests. So, if person A buys a song on iTunes and later person B buys that same song, is it proof of the social influence of A on B or is it attributable to the fact that persons A and B have common interests in music and that this partly informs their friendship in the first place? Again, is this effect causal or purely correlational?

To separate social influence from homophily, Sinan Aral, a social network scholar, and his colleagues used data from 27.4 million users on a global instant-messaging network and examined their adoption of a mobile service application. They found that homophily explained over 50 percent of the perceived contagion and that previous methods had overestimated peer influence in product-adoption decisions by anywhere from 300 percent to 700 percent.11 This finding is consistent with that of another study, one that examined technology adoption among employees in a firm and found that not controlling for homophily could lead to overestimation of peer effects by 50 percent.12 It is difficult, though possible, to partially control for homophily from observed data.13 However, the best way to identify true social-influence effects is through experiments.14
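The confound is easy to reproduce. In the toy Python simulation below (not the method of any of the studies cited), pairs of friends share a latent taste, each friend adopts a product based only on that taste, and true peer influence is zero by construction; a naive comparison still finds apparent “contagion.”

import numpy as np

rng = np.random.default_rng(seed=7)
n_pairs = 100_000

# Homophily: friends share a trait that also drives adoption
taste = rng.normal(size=n_pairs)
p_adopt = 1 / (1 + np.exp(-taste))         # each friend's adoption probability
adopt_a = rng.random(n_pairs) < p_adopt    # friend A adopts, based only on taste
adopt_b = rng.random(n_pairs) < p_adopt    # friend B adopts, independently of A

print(f"P(B adopts | A adopted) = {adopt_b[adopt_a].mean():.2f}")
print(f"P(B adopts | A did not) = {adopt_b[~adopt_a].mean():.2f}")
# The gap looks like peer influence, but A never influenced B: both adoptions
# were driven by the shared taste that made them friends in the first place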

Value of a Click

Although it is hard to measure the impact of social influence, measuring the effectiveness of a search ad on Google is widely considered easy and straightforward. You pay based on cost per click (CPC), and knowing the conversion rate from clicks to purchases should give you an estimate of the ROI of your search ads. Google provides analytics to help its clients measure the effectiveness of online ads.
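That naive calculation looks like the following sketch; all the figures are hypothetical, chosen only to make the arithmetic concrete:

# Naive search-ad ROI: every click-driven sale is credited to the ad
cpc = 0.50               # cost per click, in dollars (assumed)
conversion_rate = 0.02   # share of clicks that end in a purchase (assumed)
margin_per_sale = 30.0   # profit per converted customer (assumed)

cost_per_sale = cpc / conversion_rate                     # $25
roi = (margin_per_sale - cost_per_sale) / cost_per_sale
print(f"Cost per sale: ${cost_per_sale:.2f}, ROI: {roi:.0%}")   # 20% ROI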

This calculation, however, is deceptively simple: search ads may in fact be dramatically less effective than they appear if, as is entirely possible, some of the users who clicked a search ad would have clicked the organic link to your website anyway. Perhaps one of the most provocative studies to challenge the effectiveness of search ads was done by eBay, which had been buying search ads for over 100 million keywords. Researchers at eBay believed that users who typed branded keywords (search terms containing the name eBay, such as “eBay shoes”) were using them with the intent of visiting eBay’s website. In other words, these users would visit eBay with or without the search ads. In March 2012, eBay decided to test this hypothesis: it stopped advertising on all branded keywords and monitored its traffic in a carefully controlled experiment. It ran a similar test for nonbranded keywords—search terms that did not contain the name “eBay.” The study found that branded-keyword ads had no measurable short-term benefit, since most of the users who clicked these ads were frequent visitors to the eBay site anyway. For nonbranded keywords, new and infrequent users were positively influenced by search ads, but frequent users were not affected. The study concluded that since “frequent users whose purchase behavior is not influenced by ads account for most of the advertising expenses, [it results] in average returns that are negative.”15
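Redoing the earlier sketch with incrementality in mind shows how quickly an apparently healthy ROI turns negative. The incremental share below is an assumed figure, not eBay’s estimate; in practice it is measured by pausing ads for a test group, as eBay did:

# Incrementality-adjusted ROI: credit the ad only for sales that would not
# have happened without it
cpc = 0.50
conversion_rate = 0.02
margin_per_sale = 30.0
incremental_share = 0.10   # assumed: only 10% of ad-click sales are truly new

cost_per_incr_sale = cpc / (conversion_rate * incremental_share)   # $250
roi = (margin_per_sale - cost_per_incr_sale) / cost_per_incr_sale
print(f"Cost per incremental sale: ${cost_per_incr_sale:.2f}")
print(f"Adjusted ROI: {roi:.0%}")   # -88%: the same ad now loses money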

In response to this study, a Google spokesperson noted:

Google’s own studies, based on results from hundreds of advertisers, have found that more than 89% of search ad clicks were incremental and that 50% of the search ad clicks were incremental even when there was an organic search result for the advertiser in the top position. Since outcomes differ so much among advertisers and are influenced by many different factors, we encourage advertisers to experiment with their own campaigns.16

More recently, one of my colleagues, Michael Luca, conducted a similar test of search ads on Yelp. From a pool of over 18,000 restaurants that had never advertised on Yelp, Luca and his coauthor randomly selected 7,210. For the next three months, they ran free ads for these restaurants (without informing them, to avoid any change in their behavior) and then took the ads down to compare the restaurants’ traffic with and without the ads. The study found that Yelp ads, in fact, led to a significant increase in restaurant page views, requests for directions, and calls.17

Why were search ads ineffective for eBay but effective for Yelp restaurants? It appears that branded-keyword ads do little for well-known brands such as eBay or Amazon but may have a positive impact for lesser-known brands and restaurants. Yet a large share of search-ad money is still spent buying keywords containing a company’s own brand name—type “Hilton hotel” or “Amazon” into Google and you will see an ad for Hilton or Amazon just above the organic link for these companies.

Attribution

A related problem when trying to measure the effectiveness of search ads is attribution, or figuring out who gets the credit for a click or sale. Search is considered a bottom-of-the-funnel activity; in other words, it occurs when a consumer is actively looking to buy a product. It is quite possible, however, that in the earlier stages of her decision journey a consumer was influenced by a brand’s TV, radio, or display ads, and that this increased the likelihood of her clicking on a search ad at a later point in time. Marketing executives and advertising experts are quite familiar with this problem, although their approaches to solving it are usually less than ideal.

Google provides an overview of various attribution models used in the industry (see figure 9-2). The first five approaches are commonly used, but they are ad hoc. For example, the “last interaction” method gives 100 percent of the credit to the last touchpoint, which usually makes Google search appear more effective than it actually is. The “time decay” approach gives more weight to the later touchpoints and less weight to earlier interactions, though the choice of weights is arbitrary, which can significantly influence both the results and consequent budget allocations. The last two approaches, “model-based” and “experiment-based,” are more rigorous. Model-based methods use ad-exposure and consumer-response data to deduce the effect of each ad along the consumer journey. Experiments, often considered the gold standard, show ads of a target brand in the test group but not in the control group. The difference in the response or conversion for the two groups can then be attributed to the ads.

FIGURE 9-2

Attribution models


Source: Sunil Gupta and Joseph Davin, “Digital Marketing,” Core Curriculum: Readings in Marketing, Harvard Business Publishing, and adapted from Google Analytics Help, “Attribution Modeling Overview,” https://support.google.com/analytics/answer/1662518?hl=en.
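To make the rule-based models in figure 9-2 concrete, here is a minimal sketch in Python of three of them: last interaction, linear, and time decay. The customer journey, the half-life, and the timing figures are illustrative assumptions, and the weighting in the time-decay model is exactly as arbitrary as the text suggests.

def last_interaction(path):
    """Give 100 percent of the credit to the final touchpoint."""
    return {path[-1]: 1.0}

def linear(path):
    """Split credit equally across every touchpoint."""
    return {ch: path.count(ch) / len(path) for ch in set(path)}

def time_decay(path, days_before_conversion, half_life_days=7):
    """Weight touchpoints nearer to conversion more (weights are arbitrary)."""
    weights = [0.5 ** (d / half_life_days) for d in days_before_conversion]
    total = sum(weights)
    credit = {}
    for ch, w in zip(path, weights):
        credit[ch] = credit.get(ch, 0.0) + w / total
    return credit

journey = ["display", "social", "email", "search"]   # oldest touch to newest
days_out = [14, 7, 3, 0]                             # days before conversion

print(last_interaction(journey))       # {'search': 1.0}: flatters search
print(linear(journey))                 # 25 percent to each touchpoint
print(time_decay(journey, days_out))   # later touches get more credit

Note how the last-interaction rule assigns all the credit to search, which is precisely why that rule makes search look more effective than it is.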

Proper attribution is critical for optimal budget allocation. In 2010, BBVA Compass bank and its advertising agency faced this problem when deciding how to allocate the bank’s online budget across several search engines and display-ad networks for acquiring customers. After monitoring the click-through and conversion rates of various channels, BBVA decided to spend about 45 percent of the budget on search and 55 percent on display ads. Past data showed that the cost per acquisition for search was $73, whereas it was $88, or 20 percent higher, for display ads. Why spend more on display when it was 20 percent more costly than search? When I posed this question to Sharon Bernstein, the director of insights for BBVA’s ad agency, she shared with me the results of an experiment. During January and February 2010, the ad agency randomly divided a subset of users into two groups. Both groups continued to see the search ads, but for one group the agency stopped display ads. It then compared this group’s conversion rate through search ads (from search clicks to completed applications for a bank account) with that of the other group, which was also exposed to display ads. Those who were not exposed to display ads had a conversion rate of 1.26 percent; those who saw the ads had a conversion rate of 1.48 percent. Based on these results, the agency concluded that display ads were responsible for a roughly 20 percent higher conversion rate than search ads alone and that a 20 percent higher cost of acquisition for them was therefore justified.18

Recently, several studies have begun to address the attribution problem in a more rigorous and sophisticated fashion than the mostly simple and incomplete approaches shown in figure 9-2.19
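Whatever the method, the arithmetic behind the agency’s experiment reduces to a simple lift calculation. Here is a minimal sketch using the conversion rates reported above; the experiment’s group sizes were not disclosed, so the sketch skips significance testing:

# Conversion lift from the BBVA display-ad holdout experiment
conv_with_display = 0.0148      # search conversion when display ads also ran
conv_without_display = 0.0126   # search conversion when display ads withheld

lift = (conv_with_display - conv_without_display) / conv_without_display
print(f"Relative lift from display ads: {lift:.1%}")
# About 17.5 percent, which the agency reported as roughly 20 percent, in line
# with the 20 percent premium it was paying to acquire customers via display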

Dynamics

If you see a search or display ad, you may not click on it at that very moment, but it may still influence your behavior at a later point in time. This is true not only for brand-building ads that you see on television but also for digital ads designed to elicit an immediate response. Ignoring this fact would understate the effect of ads and lead firms to underinvest in advertising. This effect is especially significant for products such as automobiles, which consumers consider over weeks or months before buying. In its study of the “zero moment of truth” (see chapter 7), using consumers’ search data, Google created heat maps to visualize how long before the actual purchase consumers engaged in a search for various products. Figure 9-3 shows the heat map for automobile purchases, which highlights that the most intense search for cars occurs about one or two months before the actual purchase.20

FIGURE 9-3

Intensity of consumer search for automobiles


Source: Jim Lecinski, Winning the Zero Moment of Truth (Palo Alto: Think with Google, 2011), 25.

Consumers’ search behavior over time made me reflect on my discussion with BBVA bank and its ad agency. In its experiment to identify the attribution effect of display ads, the agency had tracked the impact of these ads for two weeks (an arbitrary choice) after consumers were exposed to them. But what if the effect of the ads lasted longer than two weeks? To investigate this, my coauthors Pavel Kireyev and Koen Pauwels and I obtained data from the company and built time-series models to isolate both the short-term and the long-term effects of search and display ads on the completion of new applications. Consistent with the company’s experiment, we found that the conversion rate of search ads was higher when display ads preceded them, but surprisingly we also found that search ads had a significant long-term impact beyond two weeks. Taking into account the long-term effects of search ads, we concluded that the company should increase its search-ad budget by 36 percent, even after accounting for the attribution effect of display ads.21
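Our actual models were more elaborate, but the core idea, letting an ad’s effect persist and decay over many weeks, can be sketched with a simple geometric “adstock” regression. Everything below is simulated; the spend process, decay rate, and effect size are assumptions for illustration.

import numpy as np

def adstock(spend, decay):
    """Carry a fraction `decay` of last week's accumulated ad effect forward."""
    stock = np.zeros(len(spend))
    for t in range(len(spend)):
        stock[t] = spend[t] + (decay * stock[t - 1] if t > 0 else 0.0)
    return stock

rng = np.random.default_rng(seed=1)
weeks = 104
spend = rng.gamma(2.0, 50.0, size=weeks)   # simulated weekly search-ad spend

# Simulate applications whose response to ads decays slowly (decay = 0.8)
applications = 100 + 0.05 * adstock(spend, 0.8) + rng.normal(0, 2, size=weeks)

for decay in (0.0, 0.8):
    beta = np.polyfit(adstock(spend, decay), applications, 1)[0]
    long_run = beta / (1 - decay)   # total effect of $1, summed over all weeks
    print(f"decay={decay}: long-run effect per dollar = {long_run:.3f}")
# The static model (decay=0) misses most of the ad's cumulative impact; the
# carryover model recovers an effect roughly five times larger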

Online–Offline Interaction

Even though some of the largest advertisers, such as General Motors and Unilever, now spend a large portion of their advertising budgets online, the majority of their sales still happen in offline channels. In 2016, digital had a 38 percent share of total US ad spending, and this was expected to rise to over 50 percent by 2020.22 However, by the first quarter of 2017, e-commerce accounted for only 8.5 percent of total US retail sales.23 Clearly, marketing executives believe that online advertising drives offline sales. While it is relatively easy to track a consumer who is exposed to an online ad and then buys online, connecting the dots between online ads and offline sales has, until recently, been difficult.

Tracking the link between online ads and offline sales is possible through field experiments. Even Facebook, which in the past relied on its own metrics, such as the number of “likes” or fans, is shifting to this approach to show the sales lift from its ads. Facebook has introduced a new platform, Lift, which randomly splits a target audience on Facebook into two groups: one group sees the ads in its newsfeed and the other does not. By comparing the two groups’ conversion rates, Facebook can measure the effect of its online ads on offline sales. Using this approach, Facebook showed that offline sales of data plans for GM’s OnStar system increased by 2.3 percent because of ads on Facebook newsfeeds.24

While clients may not be comfortable trusting Facebook to prove the effectiveness of its own ads, several academic studies have also used field experiments to show strong cross-channel effects of online ads. Using data from a Dutch company that sells office furniture to businesses, one study found that 73 percent of the profit impact of Google’s AdWords was from offline sales, and 20 percent of the profit impact of direct mail was from online sales.25 Another study for a major US clothing retailer found that over 80 percent of the ROI from online ads came from offline sales.26 Ignoring these cross-channel effects would lead to suboptimal budget allocation.

Not only do online ads influence offline sales, but there is also strong synergy between online and offline advertising itself. For example, a television ad may amplify its message through Twitter or Facebook. Using data from a major German car company and taking into account these cross-media synergies, one study concluded that the optimal advertising budget for online media should be double that of the company’s current allocation.27

In conclusion, the ability to conduct field experiments quickly and cheaply and the possibility of building rigorous models using large amounts of advertising and purchase data are enabling firms to better measure and optimize their marketing budgets. However, managers must be vigilant against the false metrics and spurious analyses that still seem to permeate the industry.
