CHAPTER 9

Measuring Success With MMM, MTA, and Promotional Lift

Chapter Overview

Chief marketing officers and marketing executives face one tough task every year—how to allocate a limited budget across a wide range of marketing activities so as to maximize the return on every marketing dollar.

Planning budget spending across the various marketing channels is a daunting exercise. The growing number of marketing channels and technologies adds complexity to marketing measurement. Marketers have to consider many key factors, such as media campaign quality, competition, seasonality, holidays, market growth, product launches, macroeconomic factors, and so on.

Small- to medium-sized retailers have additional challenges. They do not have the luxury of hiring a consulting firm to build an expensive marketing mix model (MMM) for them. Therefore, they need a practical and economical solution to address their planning concerns.

This chapter first walks you through the basics of two primary marketing attribution techniques, MMM and multi-touch attribution (MTA), compares their pros and cons, and then illustrates how to use promotional lift, the third and most practical solution to the planning challenges facing small- to medium-sized companies.

This chapter is organized as follows:

  • The First Attribution Tool: MMM
  • The Second Attribution Tool: Multichannel Attribution (MCA) and MTA
  • The Third Attribution Tool: Promotional Lift Analysis
  • Acquisition Versus Retention
  • Conclusion

The First Attribution Tool: MMM

MMM involves the use of advanced statistical techniques such as linear regression, nonlinear regression, influence maximization approaches, agent-based approaches, or empirical methods to work with top-down, macrolevel information.

MMM analyzes historical information, both internal variables (distribution, price, TV spend, direct mail campaigns, print catalogs, outdoor campaigns, newspaper and magazine spend, consumer promotion data, digital spend [Pay Per Click (PPC), remarketing, e-mail, affiliate marketing, search engine optimization, content marketing], social marketing, website visitors, and so on) and external factors (market and pricing data, product launches, offline and digital promotions, market elasticity, seasonality, weather, competition, and news events), to quantify the sales impact of various marketing activities mathematically. MMM enables companies to understand and assess the incremental value of their investments and then forecast the impact of future sets of tactics.
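For readers who want to see the mechanics, below is a minimal sketch of a regression-based marketing mix model in Python. Everything in it is illustrative: the file mmm_weekly.csv, its column names, the geometric adstock decay, and the choice of ordinary least squares are assumptions for the sketch, not a description of how any particular MMM vendor works.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def adstock(spend, decay=0.5):
    # Simple geometric carryover: part of each week's spend keeps working in later weeks.
    carried = np.zeros(len(spend))
    for t, x in enumerate(spend):
        carried[t] = x + (decay * carried[t - 1] if t > 0 else 0.0)
    return carried

# Hypothetical weekly data: sales plus spend by channel and a holiday flag.
df = pd.read_csv("mmm_weekly.csv")  # columns: sales, tv, direct_mail, ppc, email, holiday

channels = ["tv", "direct_mail", "ppc", "email"]
X = pd.DataFrame({ch: adstock(df[ch].to_numpy()) for ch in channels})
X["holiday"] = df["holiday"]

model = LinearRegression().fit(X, df["sales"])

# The intercept approximates base sales; each coefficient approximates the
# incremental sales per (adstocked) dollar of spend in that channel.
print("Base weekly sales:", round(model.intercept_))
for name, coef in zip(X.columns, model.coef_):
    print(f"{name}: {coef:.2f} incremental sales per dollar")

Real models add saturation curves, pricing, competition, and seasonality terms, but the basic idea of decomposing sales into a base plus channel contributions is the same.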

Why Companies Build MMM

A successfully implemented marketing mix model can provide the following benefits:

  1. Help in distinguishing the reasons for the changes in business performance by isolating the impact of internal and external factors.
  2. Improve budget allocation, allowing companies to optimize their marketing mix by forecasting the likely impact of changes to various marketing mix variables.

MMM enables companies to address critical business questions such as:

  • What is the offline and online impact from traditional, digital, and social media on sales?
  • What is the return on investment (ROI) for each channel and campaign?
  • How can the effectiveness of each marketing channel be improved and optimized by audience, campaign, geography, timing, duration, and publisher to increase ROI?
  • What would be the impact of a future change in the marketing strategy and budget?
  • At what point does each medium show diminishing returns on investment?
  • Are there synergies and cannibalizations between media?
  • What are the appropriate cross-media, cross-channel attributions (e.g., TV driving search)?
  • How to take advantage of media synergies?
  • How do price and promotions affect sales?
  • What is the impact of internal operational factors and external factors (macroeconomic, weather, competition, etc.)?
  • How does seasonality affect sales?
  • How does marketing perform in-season versus out of season?

The Uniqueness of MMM

Compared to other attribution methods, MMM is unique in the following ways:

  1. MMM provides a top-down, high-level view of key drivers of sales, profit, and performance and enables firms to understand and assess the incremental value of their investments.
  2. MMM looks at all factors, both internal and external, that influence sales outcomes.
  3. MMM breaks down total sales revenue into two parts: the base revenue and the incremental sales revenue due to marketing.
  4. MMM utilizes historical data. It does not require a change to current initiatives. No experimental design is required.

Key Deliverables of an MMM Study

A typical MMM study will provide the following insights:

1. Contribution of Each Medium

MMM breaks down total sales revenue into two parts: the base revenue and the incremental sales revenue due to marketing. Base sales revenue is what marketers get if they do not do any advertising. It is sales due to brand equity built over the years. Incremental sales are sales generated by marketing activities such as TV advertising, print advertising, and digital spend; promotions such as direct mail and catalogs; and so on. Total incremental sales can be further split into sales from each input to calculate each input's contribution to overall sales (Figure 9.1).

Figure 9.1 Current year sales contribution—Base versus incremental

Media            Percentage
Base             60%
TV Ads           15%
Direct Mail       6%
PPC               8%
Display           5%
E-mail            2%
Catalog           3%
Radio             1%


2. ROI of Media Spending

MMM measures the ROI of each marketing channel. For instance, in Figure 9.2, for every dollar you spent on e-mail, you got $8.6 in sales revenue back, but you only got $1.3 back for every dollar you spent on radio.

Media            Return
E-mail           $8.6
PPC              $4.5
Display          $3.8
Direct Mail      $3.3
TV Ads           $2.3
Catalog          $1.6
Radio            $1.3

Figure 9.2 Return of media spending

3. Points of Diminishing Returns

Marketers also want MMM to tell them the point of diminishing returns for each channel. The law of diminishing returns states that in a production process, as one input variable increases, there will be a point at which the marginal per-unit output starts to decrease, holding all other factors constant. In other words, "the gain is not worth the pain."

The diminishing returns curve helps determine where your investments will get the optimal marginal returns. When a medium has high ROI and a weak diminishing-returns effect, spend on that medium should be increased. When a medium yields little marginal gain, you may consider reducing spend on it. For instance, one of my clients found that e-mail generated the highest yield among all channels, but it was unlikely that they could significantly increase the size of their e-mail database. E-mail had long passed the point of diminishing returns, so any additional investment in e-mail would produce a lower marginal return. The client therefore decided not to increase investment in e-mail.
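One simple way to locate the point of diminishing returns is to fit a concave response curve to historical spend and attributed sales for a single channel and then examine the marginal return at different spend levels. The sketch below does this with scipy's curve_fit; the logarithmic functional form and the example arrays are illustrative assumptions, not data from any real client.

import numpy as np
from scipy.optimize import curve_fit

def response(spend, a, b):
    # Concave response curve: sales = a * ln(1 + b * spend), so returns diminish as spend grows.
    return a * np.log1p(b * spend)

# Hypothetical weekly e-mail spend (in $000) and the sales attributed to it.
spend = np.array([5, 10, 20, 40, 60, 80, 100], dtype=float)
sales = np.array([22, 38, 60, 85, 98, 105, 110], dtype=float)

(a, b), _ = curve_fit(response, spend, sales, p0=[50, 0.1])

# Marginal return d(sales)/d(spend) = a * b / (1 + b * spend); once it falls
# below $1 of sales per extra $1 of spend, additional investment is hard to justify.
for s in (20, 60, 100):
    marginal = a * b / (1 + b * s)
    print(f"at spend {s}k, each extra dollar returns about ${marginal:.2f}")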

4. MMM Simulator—Learn how to execute each activity better

MMM can also enable brands to explore high-level "what if" scenarios that compare marketing tactics. MMM agencies will usually provide clients with a useful tool called an MMM simulator that allows clients to simulate different spend combinations, project sales activity during a specific period (i.e., next quarter or next year), and assess budget allocation across various channels. By shifting money from low-ROI media to high-ROI media, marketers can maximize sales without increasing the marketing budget.
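In spirit, such a simulator projects sales for alternative spend mixes using the response curves fitted by the model. Below is a deliberately simplified sketch: the per-channel curve parameters and the two spend plans are hypothetical assumptions, and a real simulator would be far richer.

import numpy as np

# Hypothetical fitted response curves per channel: sales = a * ln(1 + b * spend).
curves = {"tv": (900, 0.002), "ppc": (400, 0.010), "email": (120, 0.050)}

def projected_sales(plan):
    # Project total incremental sales for a spend plan {channel: dollars}.
    return sum(a * np.log1p(b * plan[ch]) for ch, (a, b) in curves.items())

current = {"tv": 500_000, "ppc": 150_000, "email": 50_000}
what_if = {"tv": 420_000, "ppc": 230_000, "email": 50_000}  # same total budget, money shifted to PPC

print("current plan:", round(projected_sales(current)))
print("what-if plan:", round(projected_sales(what_if)))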

The Downside of MMM

The adoption of MMM and marketing ROI measurement has grown; however, marketing mix models have their drawbacks. In recent years, MMM has come under increasing criticism and scrutiny. MMM has been criticized for losing its relevance to answering critical business questions and for failing to keep up with the growing complexity and fragmentation of the marketplace.

As early as 2012, Forbes published an article, The Downside of Marketing Mix Models Is There's No Upside for CMOs, written by David Hoo and Michael von Gonten, principal consultants at the research firm Effective Marketing Management. Hoo and von Gonten claimed that these models "were fundamentally flawed in being biased to favor corrosive price promotion over brand-building advertising and to favor advertising cost efficiency over sales-growth effectiveness." They argued that

first, mix models measured advertising and promotion effects only among total sales rather than among penetration versus repeat purchases. Second, mix models measured immediate effects within a single week with no linkage to any other weeks in order to capture the downstream consequences of the immediate effects. Mix models simply do not measure the longer-term, positive value of advertising. At the same time, the longer-term negative consequences of price promotion were not measured. As a result, mix models understated the value of advertising, and overstated the value of price promotion, producing a systematic bias that favors price promotion over advertising.1

Their remedy to solving these problems was to follow these two guiding principles: (1) penetration, not repeat, has the most profound impact on sales volume; and (2) advertising, not promotion, is the most effective, sustainable growth driver.

In August 2016, MMM expert Michael Wolfe published an article on LinkedIn called The Death of Marketing-Mix Modeling, As We Know It. Mr. Wolfe recognized five major issues with MMM:

  1. MMM focuses on the short-term effects of media and generally ignores or does not measure the long-term effects;
  2. MMM models only measure the impact of ad gross rating points (GRPs) or spend and not the ad message or creative;
  3. MMM does not account for attribution bias, particularly within digital media;
  4. MMM tends to not quantify the “synergies” across media channels, where the impact of two or more simultaneous media activations is greater than the sum of the independent parts;
  5. MMM modeling might be able to explain what is happening to brand sales, but because it excludes the "voice-of-the-customer," it fails to explain "why" and provide insights based on the customer's mindset and current brand experience.2

Mr. Wolfe also provided corresponding solutions to overcoming these shortcomings:

  1. Expand its measurement focus toward quantifying the longer-term effects of marketing and develop more accurate and holistic ROI estimates;
  2. Focus more on measuring the effectiveness of ad messaging and creative to better align and develop marketing communications strategies;
  3. Adapt its method to avoid the pitfall of “last touch attribution;”
  4. Measure the interactions and synergies that exist between and across the marketing-mix, in order to form a foundation for integrated marketing;
  5. Put the “voice-of-the-customer” front and center within the models in order to more fully understand the customer’s perspective and motivations driving business performance.3

Michael's critique of MMM quickly drew a response from Patrick McGraw, Senior Vice President of Client Operations at Marketing Management Analytics (MMA), an Ipsos company. Patrick posted an article, In Defense of Marketing Mix Modeling, on his company's blog. Patrick argued that Michael

fails to reflect current industry practice, advancements in data, or modeling approaches, and a general understanding of how this capability has evolved to address the latest business questions and marketing ecosystem. It’s misleading. And it fails, as I have so often seen these so-called critiques fail to do, offer practical business planning alternatives.4

Patrick claimed that the new generation of MMM had evolved and gained new capabilities to meet clients’ needs for increased speed, granularity, and holistic business reporting.

My Personal Experience Working With MMM Agencies

Building a marketing mix model requires external data and statistical expertise; therefore, most MMM today is done by consulting firms on behalf of their corporate clients. I am not an MMM expert, but I have many years’ experience from the client side working with MMM vendors, and I’d like to share with you my personal experience and lessons learned from working with them for more than a decade.

  1. The biggest limitation of MMM is a lack of transparency. MMM per se is not a black box; the technique has been proven useful by many users. The problem is that the model is built in a black box. Agencies treat MMM as top-secret intellectual property, and all the modeling vendors I have worked with consistently rejected requests for details about their models.
  2. It is hard to validate the accuracy of MMM. Because MMM agencies treat their information as proprietary, clients feel that they have no means of verifying or confirming the accuracy of the models. If you really want to verify the MMM model, and I think you should, here are a few ideas that may help:
    • Use the hold-out sampling method. Christopher Doyle suggested that you can withhold a portion of locations or time periods from the dataset you send to the vendor. When they have completed the modeling process, send them only the input data for the missing locations/time periods. They will estimate results, and you can compare those estimates with the real results that were withheld (see the sketch after this list).5
    • Build a time series forecasting model to validate the accuracy of the MMM model. The effectiveness of a marketing mix model is largely determined by its capability to predict sales for the next specified timeframe. Because MMM agencies use external data that clients have no access to, theoretically speaking their MMM models should be more accurate in forecasting future sales revenue than models built by clients themselves, assuming everything else holds equal. What we found was that this was not always the case: their predictions for the next quarter or the following year's sales were no better than a regular time series forecasting model developed by our internal analytics team using only internal data.
    • Use experimental design to validate the MMM model. Test-control design and lift analysis are still the gold standard in data science. For instance, suppose the lift analysis tells you that the print catalog generates $0.80 of incremental sales per dollar invested, but the marketing mix model finds that the return of the print catalog is $1.10, meaning only $0.10 of incremental sales per dollar spent on the print catalog. The difference is so significant that it demands further investigation into the mix model.
    • Use common sense to validate MMM. If the findings of MMM are totally different from what you have learned from your own work, chances are that the model is not right: the data may not be correct, the modeler may have used the wrong data, or something else may have gone wrong. I once noticed that in the mix model one of the channels had a very high ROI, which did not make sense to me. After revisiting how the model was built, the modeler found out that he had not imported the right dataset, a mistake that is rare but can still happen from time to time.
  3. Data is the biggest challenge of a successful marketing mix model. The success of MMM depends on the availability of relevant, accurate, consistent data covering a sufficiently long period. I'd highly recommend forming a dedicated cross-department team for the MMM project to ensure the completeness and integrity of the data. The MMM project team should plan in advance to allow sufficient time to collect relevant data from both internal stakeholders and external partners.
  4. The analytics team in your organization is critical to ensure the data are clean, deduped, correct, and summarized if necessary. Talk to your MMM agency to understand how they deal with incomplete and missing data. Do they delete them or use statistical techniques such as regression models or machine learning to find the best estimates and replace the missing values?
  5. A marketing mix model is expensive, and any additional requirements will normally incur additional costs.
  6. MMM is not real time. It takes months for MMM agencies to build a model and present the results to clients. Often, management receives the results after investment decisions have been made.
  7. The results derived from MMM are often things you probably already knew. Sometimes, the MMM serves more as a political tool than as an analytics function.
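For the hold-out check suggested earlier in this list, the comparison itself is straightforward once the vendor returns estimates for the withheld locations or weeks. Below is a minimal Python sketch; the file names and column names are hypothetical assumptions.

import numpy as np
import pandas as pd

# Hypothetical files: the actual sales you withheld from the vendor and the
# vendor's estimates for those same locations and weeks.
actual = pd.read_csv("withheld_actuals.csv")     # columns: location, week, sales
estimates = pd.read_csv("vendor_estimates.csv")  # columns: location, week, sales_hat
merged = actual.merge(estimates, on=["location", "week"])

# Mean absolute percentage error: how far off, on average, the vendor's estimates
# are from the sales figures they never saw.
mape = np.mean(np.abs(merged["sales"] - merged["sales_hat"]) / merged["sales"]) * 100
print(f"hold-out MAPE: {mape:.1f}%")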

The Second Attribution Tool: MCA and MTA

MMM has been around for almost four decades, whereas MCA is a relatively new arrival on the scene. MCA is the effort to understand which digital marketing channel (i.e., social, display, YouTube, referral, e-mail, search, and others) contributed to a particular conversion (or multiple conversions). MCA is often used interchangeably with MTA. However, according to some MCA experts, there are nuances between the two terms, although both are confined to digital channels. MCA focuses on assigning attribution credit by channel (social, PPC, remarketing, organic, etc.); it does not factor in specific messaging, sequence, or touchpoints. Multi-touch attribution is a more granular and more comprehensive exercise: it focuses not only on different channels but also on the specific ads, including the channels they ran on, the messages, and the sequence of interaction.

Types of MTA Models

There are many different MTA models. They fall into two categories: single-touch attribution models and multi-touch attribution models.

Single-Touch Attribution Models

The first-touch and last-touch models are two single-touch attribution models. Even though a customer may have multiple touches before reaching a conversion, these two models only consider either the first or the last touchpoint that was encountered before a conversion, rather than every touchpoint engaged with throughout the sales cycle.

  • First-Touch Attribution: This model gives full sales credit to the first marketing touchpoint interacted with before conversion.
  • Last-Touch Attribution: This model gives full sales credit to the last marketing touchpoint interacted with before conversion.

Multi-Touch Attribution Models

These models give credit to each touchpoint engaged with before conversion. The only difference between them is how much sales credit they ascribe to each touchpoint based on the interaction sequence (a short sketch after this list shows how two of these rules split credit).

  • Linear Attribution: This model gives each touchpoint across the buyer journey the same amount of credit toward driving a sale.
  • U-shaped Model: This model attributes 40 percent each to the first touchpoint and the lead-conversion touchpoint. The other 20 percent is divided among the additional touchpoints encountered in between.
  • Time-Decay Model: This model gives more credit to the touchpoints a consumer interacts with closer to the conversion.
  • The Position-Based Attribution Model: In this model, 40 percent credit is assigned to each the first and last interaction, and the remaining 20 percent credit is distributed evenly to the middle interactions.
  • The Custom-Attribution Model: In this model, fractional credit is assigned to each touchpoint based on the company’s own rules.
  • Fractional—Algorithmic Attribution Model: Credit is assigned to multiple events along a path to conversion based on algorithmic analysis of the relationship of each event relative to all other events along the path to conversion. The fractional credit of each touchpoint is determined via computer-based regression or game theory calculations.
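To make the credit-splitting rules concrete, here is a minimal Python sketch of the linear and U-shaped rules described above. It assumes each journey is a list of distinct touchpoint labels; the journey itself is hypothetical.

def linear_credit(touchpoints):
    # Linear attribution: every touchpoint gets an equal share of the sale.
    share = 1.0 / len(touchpoints)
    return {tp: share for tp in touchpoints}

def u_shaped_credit(touchpoints):
    # U-shaped attribution: 40% to the first touch, 40% to the last, the rest split in the middle.
    if len(touchpoints) == 1:
        return {touchpoints[0]: 1.0}
    if len(touchpoints) == 2:
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
    middle_share = 0.2 / (len(touchpoints) - 2)
    credit = {tp: middle_share for tp in touchpoints[1:-1]}
    credit[touchpoints[0]] = 0.4
    credit[touchpoints[-1]] = 0.4
    return credit

journey = ["display", "email", "ppc", "organic_search"]  # hypothetical journey, distinct labels
print(linear_credit(journey))    # 25% each
print(u_shaped_credit(journey))  # 40% / 10% / 10% / 40%

Time-decay, position-based, and custom models follow the same pattern; only the weighting rule changes.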

Why Perform MCA or MTA?

There are supposedly three major benefits of performing MCA/MTA:

1. Optimize digital marketing spends

Based on the credit each marketing channel receives, along with key performance indicators such as cost per acquisition, cost to serve, customer value, and lead quality for each channel, marketers can decide which channels deliver the most value for the money spent. With that insight, marketers can be smarter in distributing marketing budgets across channels for better outcomes.

2. Improve digital campaign performance

Marketers can quantify the performance of each campaign and its role in the customer's journey. That enables them to decide whether to increase spend on effective campaigns, devote more funds to similar campaigns, or divert funds from those that were ineffective.

3. Improve customer experience

Relevancy is key to improving customer experience. MCA/MTA provides visibility into the success of touchpoints across the customer's entire journey, thus helping marketers understand customer behaviors, expectations, and needs along the journey. With MCA/MTA, marketers can develop strategies that are more aligned, relevant, and targeted. MCA/MTA also helps shorten sales cycles by engaging consumers with fewer but more impactful marketing messages that meet users' specific needs and desires.

Limitations of MCA/MTA

Unfortunately, the current MCA/MTA models are far from mature and are confined to digital channels only; therefore, they are unable to fully deliver what they promise. While MCA/MTA models can provide individual-level insights that traditional MMM cannot, they are far from perfect. These limitations, discussed more fully below, include:

Attribution Does Not Account for Offline to Online Effects

MCA/MTA does not incorporate offline data, such as TV or print ads like direct mail and catalogs. However, as we all know, consumer interest is influenced by total marketing effort, and offline marketing can be a crucial component of the consumer journey. When MCA/MTA looks at the relative impact of digital channels without taking offline media into account, its accuracy is questionable. As a matter of fact, for multichannel retailers the ROI of digital marketing such as PPC and remarketing has generally been inflated, because those channels ride on the traffic driven by offline marketing efforts. That is why, when you eliminate direct mail, print catalogs, or TV ads, online sales decline significantly. Therefore, an online-only MCA/MTA model is not a true MCA model, especially for multichannel retailers.

The Fractional Credit of Each Touchpoint Is Not the True Lift

In digital-channel-only MTA, the attributed ROI is not true incremental ROI. MCA/MTA assigns a fractional credit to each channel or touchpoint, which can be fairly misleading if you use MCA/MTA data alone to make investment decisions. For instance, if the MCA model finds that content marketing gets the largest portion of the credit, does that mean you should invest most of your budget in content marketing? It depends. Marketing in the real world does not work in such a simple manner; content marketing can have either positive or negative incremental ROI. Therefore, you need to weigh both the fractional credit and the incremental ROI of each channel to decide how to allocate your budget across channels.

Is MCA/MTA Worth It?

The MTA/MCA models are still in their infancy and are not perfect yet. So, are they worth it?

My opinion is that it is better than no measurement at all. For instance, in affiliate marketing, while content websites may win the first clicks, the deals and coupon websites are more likely to get the final clicks and thus claim all the sales commissions under the last-click model. One solution is to replace the last-click model with a custom-attribution model so that content affiliates also get the credit and rewards they deserve. That is a good example of applying MCA/MTA to improve marketing efforts.

As to whether you should buy commercial MCA/MTA software, it totally depends on what goals you want to achieve, how sophisticated your products are, how many simultaneous campaigns you will run, and how deep your pockets are. At this point, implementing an advanced commercial multi-touch attribution model requires both domain expertise and money, and the benefits you will receive usually are not enough to justify the cost of the software. Many vendors have fully realized the shortcomings of these digital-channel-only MCA/MTA models and are developing new approaches that encompass all channels, both online and offline, to tell a more accurate story about a customer's journey. But until these tools can really deliver what they promise at an economically sound price, free tools such as Google Analytics remain your best friend.

The Third Attribution Tool: Promotional Lift Analysis

MMM is normally too expensive for smaller organizations to build. MCA/MTA models are necessary but incomplete on their own. Luckily, marketers have a third option at their disposal to measure the effectiveness of marketing—the promotional lift analysis.

Lift is one of the most frequently used terms in the world of data-driven marketing. Statisticians and modelers use lift to gauge the effectiveness of predictive response models, and direct mail marketers use lift to measure the performance of direct mail campaigns. However, although both parties use the same word “lift,” their definitions are quite different.

For a statistician or a modeler, lift is a measure of the effectiveness of a predictive model, calculated as the ratio between the results obtained with the model and the results obtained without it.

Say you have 100,000 names in your database and, if you mail to everybody, the response rate is 2 percent. If you randomly mail to 30 percent of your customers, you will get 600 responders (100,000 × 30 percent × 2 percent) based on the 2 percent overall response rate.

Now, suppose you used a predictive response model to select the top 30 percent of customers, and out of these 30,000 mailed customers, 1,500 responded. The response rate is 5 percent (1,500/30,000).

Therefore, the lift generated by using the model as opposed to not using the model is an astonishing 150%! ((5% model − 2% random)/2% random = 1.50 = 150%). So, in essence, the lift is calculated by comparing the response rate of the selected segments to the average response of the entire population. It evaluates how effective a model can be in selecting customers who are most likely to respond to your communications.
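The arithmetic above is easy to reproduce. Here is a short Python sketch that recomputes the 150 percent lift from the same numbers.

# Numbers from the example above.
database_size = 100_000
overall_response_rate = 0.02        # mail everybody: 2 percent respond
mailed = int(database_size * 0.30)  # mail the model-selected top 30 percent
model_responders = 1_500

model_response_rate = model_responders / mailed     # 0.05
random_responders = mailed * overall_response_rate  # 600
lift = (model_response_rate - overall_response_rate) / overall_response_rate

print(f"random mailing: {random_responders:.0f} responders")
print(f"model mailing:  {model_responders} responders ({model_response_rate:.0%})")
print(f"model lift:     {lift:.0%}")  # 150%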

Lift is a great metric for measuring the effectiveness of your response model. The problem is that some companies misuse this metric to report the results of direct mail campaigns, which is inappropriate and inflates the apparent performance of those campaigns. A better way is to use promotional lift to measure the incremental ROI of marketing campaigns.

What Is the Promotional Lift?

Marketers measure the lift of marketing campaigns by comparing results between like customers, with special emphasis on incremental sales and margin. To do so, marketers create mail groups and control groups. The customers in the mail group and the control group are virtually identical, except that the mail group receives the treatment and the control group does not. Therefore, assuming everything else holds equal, the promotional lift is the incremental sales or incremental margin that can be credited to the marketing effort (i.e., TV ads, direct mail, catalog, PPC, e-mail, etc.). Obviously, this is a more stringent method that reflects the true lift of marketing campaigns.

How to Use the Promotional Lift to Measure Marketing Efforts

The promotional lift analysis is a cost-effective yet very powerful tool that every marketer should take advantage of. It addresses the ultimate mission of marketing—generating incremental value. Promotional lift is not only complementary to MMM and MCA but can also verify their accuracy.

Promotional lift analysis techniques can be widely applied to measure the results of TV advertising, direct mail, catalog, e-mail campaigns, the effect of online advertising on offline, and offline advertising on online.

How to Measure Lift of Direct Mail and Catalog

To calculate the promotional lift for direct mail or catalog campaigns, you will need to know six numbers: the total mailing cost, the number of mail customers, the average margin/customer of the mail, the average response rate of the mail customers, the average margin/customer of the control, and the average response rate of the control group.

Below is the formula that calculates the lift of direct mail campaigns:

Promotional Lift = Total Incremental Margin − Total Cost (including creative, printing, postage, etc.)

Where Total Incremental Margin = Number of Mail Customers × (Average Response of Mail × Average Margin of Mail − Average Response of Control × Average Margin of Control), and Total Cost = Number of Mail Customers × Cost per Customer.

This formula can be rewritten as

Promotional Lift = Number of Mail Customers × (Average Response of Mail × Average Margin of Mail − Average Response of Control × Average Margin of Control − Average Cost per Customer).
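As a worked illustration of the formula above, the Python function below computes the promotional lift from the six required inputs; the example numbers for a catalog drop are hypothetical.

def promotional_lift(n_mailed, resp_mail, margin_mail,
                     resp_control, margin_control, cost_per_customer):
    # Incremental margin generated by the mailing, minus its total cost.
    incremental_per_customer = (resp_mail * margin_mail
                                - resp_control * margin_control
                                - cost_per_customer)
    return n_mailed * incremental_per_customer

# Hypothetical catalog drop: 50,000 customers mailed at $0.75 per piece.
lift = promotional_lift(n_mailed=50_000,
                        resp_mail=0.060, margin_mail=45.0,
                        resp_control=0.045, margin_control=42.0,
                        cost_per_customer=0.75)
print(f"promotional lift: ${lift:,.0f}")  # $3,000 in this example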

I usually create two lift reports: a detailed campaign lift report that contains information about every individual direct mail campaign at the segment level, and a header campaign lift report that shows only the total lift of all completed campaigns.

How to Measure Lift Generated by TV Ads

Nineteenth century Philadelphia retailer John Wanamaker supposedly said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” No marketing vehicle is more difficult to measure than TV ads.

Traditionally, TV effectiveness was measured by GRP, a standard measure in TV advertising. GRP is calculated as the percentage of the target market reached multiplied by the exposure frequency. Thus, if you advertise to 40 percent of the target market and give them four exposures, you would have 160 GRPs. MMM is another tool for measuring TV results, which we discussed earlier in this chapter.

In addition to MMM and GRP, marketers also use geographic A/B split testing to measure the lift of TV ads. Basically, you need to find two regions that are similar in terms of demographics. Set one region as a TV-supported region and the other as a non-TV (control) region. Run the TV campaign and then measure changes in store foot traffic, sales per capita or per store, direct responses to the unique phone numbers, URL, or e-mail address shown in the TV ads, web searches, website visits, and online sales in each of the two regions. Finally, calculate the incremental sales uplift in the TV-supported region versus the non-TV region.
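Once the two regions are defined and the campaign has run, the uplift calculation itself is simple. A minimal sketch, assuming hypothetical weekly per-store sales for the two regions over the same campaign window:

import numpy as np

# Hypothetical weekly sales per store ($000) during the same campaign window.
tv_region = np.array([118, 124, 131, 127, 135, 129], dtype=float)
control_region = np.array([110, 112, 115, 111, 114, 113], dtype=float)

# Incremental uplift of the TV-supported region versus the control region.
uplift = tv_region.mean() - control_region.mean()
uplift_pct = uplift / control_region.mean() * 100
print(f"average weekly uplift per store: ${uplift:.1f}k ({uplift_pct:.1f}%)")

In practice you would also want to adjust for any pre-campaign baseline difference between the two regions before crediting the uplift to TV.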

How to Measure Cross-Channel Attribution

Cross-channel attribution refers to the process of determining the effect of online ads on offline sales and of offline ads on online sales. So how do you measure cross-channel attribution?

Again, you can employ the geographic A/B split testing methodology by finding two retail stores in different regions that are similar in terms of demographics and revenue. Set one region as the testing region and the other as the control region. Run online ads (i.e., PPC and remarketing, for instance) and then measure changes in metrics such as store foot traffic, sales per capita or per store, direct responses to store phone numbers, redemptions of coupon codes from the online ads, and so on, thus identifying the incremental sales uplift of the store in the testing region versus the store in the control region.

By applying the same methodology, marketers can measure the impact of offline ads on online sales as well. For instance, if you run a direct mail campaign, you can calculate the lift by comparing the online sales of the mail group with the online sales of the control group.

Acquisition Versus Retention

When deciding budget allocation, marketers are faced with one of the major challenges—how to determine the right split between acquisition and retention.

Marketers have long known that acquisition is more expensive than retention. In 2014, Harvard Business Review published an article written by Amy Gallo stating that

Depending on which study you believe, and what industry you’re in, acquiring a new customer is anywhere from 5 to 25 times more expensive than retaining an existing one. It makes sense: you don’t have to spend time and resources going out and finding a new client—you just have to keep the one you have happy.6

Since then, many marketers have believed that "acquiring a new customer is five times more expensive than retaining a customer." I am not saying that the magic number is wrong. What I want to remind fellow marketers is to use that number in the context of real business situations. The truth is that not all existing customers are worth retaining, and not every retained customer will deliver that kind of high ROI.

In reality, the balance between customer acquisition and retention is never static. While different industries and different companies have their own ways of budget planning, I’d like to contribute three tips here:

First, when determining the split between acquisition and retention, the best approach is to use managerial segmentation (Chapter 2) and existing customer lifetime value (Chapter 8) to identify customers who are worth retaining. This first step will help you decide how much to spend on retention and what ROI you can expect. After that, use insights derived from MMM, MCA, and promotional lift analysis to distribute your retention budget across the different marketing media.

Second, the cost of customer acquisition is not necessarily always five times more than retention. The key to improving the ROI and effectiveness of acquisition is to leverage customer segmentation, best-customer profiles, and customer lifetime value to identify and acquire look-alike prospects that will bring high lifetime value to your company.

Third, both acquisition and retention are essential for the long-term success of your company. Customer retention has an immediate impact on the bottom line, but acquisition is the lifeblood that makes your business sustainable. Without successful customer acquisition, your customer base will shrink, which will ultimately undermine your retention strategies. In many highly competitive industries, survival and gaining market share are sometimes more important than the bottom line, which probably explains why most marketing executives like to allocate more money to acquisition than to retention. Therefore, knowing that acquisition is 5 to 25 times more expensive than retention is important; knowing how to use that number in the context of an ever-changing competitive landscape and make decisions that serve a bigger goal, even at the expense of temporary profit margin, is more important. That is why marketing is a game of both science and art.

Conclusion

MMM provides great macrolevel insights into the ROI of marketing activities and has been a great tool for optimizing marketing efforts, but it has many limitations. MCA/MTA is capable of providing the granular insights that modern marketers rely on but is so far limited to the digital space. Promotional lift analysis is a simple yet versatile and powerful method for marketing ROI diagnostics and an excellent complement to both MMM and MCA/MTA.

All three attribution techniques are necessary but incomplete on their own; none should be treated as “the answer.” Therefore, marketers must employ a unified marketing measurement that combines insights derived from a variety of techniques to gain a comprehensive view of the effectiveness and ROI of marketing initiatives.

The budgeting process will always involve a degree of uncertainty and educated guesswork. For instance, one typical drawback shared by all three of these techniques is that they favor short-term effects and cannot accurately assess the long-term impact of advertising on brand building. Therefore, marketing executives must resist the temptation to overspend on promotional activities and should allocate enough money to brand building and new customer acquisition, a decision that is certainly not easy but is absolutely necessary and will ultimately benefit the long-term health and growth of the organization.


1 Hoo, D., and M. von Gonten. 2012. “The Downside Of Marketing Mix Models Is There’s No Upside For CMOs.” Forbes.com, November 28, https://forbes.com/sites/onmarketing/2012/11/28/the-downside-of-marketing-mix-models-is-theres-no-upside-for-cmos/#bacd5e66805e

2 Wolfe, M.S. 2016. “The Death of Marketing-Mix Modeling, As We Know It.” Greenbook Blog, September 12, https://greenbookblog.org/2016/09/12/the-death-of-marketing-mix-modeling-as-we-know-it/

3 Wolfe, M.S. 2016. “The Death of Marketing-Mix Modeling, As We Know It.” Greenbook Blog, September 12, https://greenbookblog.org/2016/09/12/the-death-of-marketing-mix-modeling-as-we-know-it/

4 McGraw, P.F. 2016. “In Defense of Marketing Mix Modeling.” MMA.com, September 14, https://mma.com/blog/defense-marketing-mix-modeling/

5 Doyle, C. 2016. “Top 10 Challenges for Implementing Marketing Mix Models.” LinkedIn, September 12, https://linkedin.com/pulse/top-10-challenges-implementing-marketing-mix-models-christopher-doyle/

6 Gallo, A. 2014. “The Value of Keeping the Right Customers.” Harvard Business Review, October 29, https://hbr.org/2014/10/the-value-of-keeping-the-right-customers
