Chapter 19
In This Chapter
Learning about A/B testing and why it’s important
Understanding what multivariate testing is and how to use it
Conducting your first test
Knowing what to test
All lead-generation campaigns need to be tested. Why? Because testing is how you make your campaigns better over time. Personally, I find testing extremely interesting. Without it, I wouldn’t have insight into my programs, and I certainly wouldn’t know what was working and what wasn’t.
A marketing team that doesn’t test often is a marketing team that is blind to what its leads are doing. So what do you do? Build testing into your campaign creation and measurement process, and make sure testing is ingrained in each and every team member’s mind. Each of your marketers should test his or her campaigns on a regular basis, or you can even have someone on staff who specializes in (or who highly enjoys) testing.
Testing in lead generation and marketing is akin to the scientific method: ask a question, hypothesize the answer, formulate your predictions, test those predictions, and analyze the results.
Marketers typically use a few standard testing types to test their campaigns. Note that you can test pretty much any aspect of a campaign, from the channel used to the copy, the subject line, the time of day sent, and more. So get creative with what you are trying to find out. The more you know, the more you grow!
Depending on what you want to test, you can use your marketing automation platform, a solution that specializes in testing like Optimizely, or you can even use Google Analytics to track changes you have made. In some cases, you may not even need additional testing help. For instance, if you post two messages on a social channel, you can track yourself which post has more shares and engagement.
For more information on which product to use for which test, check out the chart that Conversion Rate Experts released at www.conversion-rate-experts.com/split-testing-software, which covers a variety of testing options.
In the next few sections, I dig into some common testing types to set up the framework for success.
A/B tests are probably the most common type of test that a marketer runs. These are also called split tests, and they compare the conversion rates of two assets, such as an email or landing page, by changing one element at a time. You can also compare more than two assets by running an A/B/C test or an A/B/C/D test. However, when starting out, I recommend that you begin with a simple A/B test.
The key here is that you are changing only one variable at a time so you can pinpoint any change in conversion and attribute it to that changed variable. A/B tests are fantastic for testing things like CTA (call-to-action) buttons, copy, headlines, graphics, form length, and email time sent. You can even use A/B testing for social messaging, content format, or webinar frequency.
When using A/B testing, all you do is split your email send or your PPC landing page traffic in two, or post your social messaging at two different times. You should then see one asset performing better than the other.
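The split itself is simple: shuffle your list randomly and send each half a different version. Here is a minimal sketch in Python; the `split_ab` helper and the example email addresses are hypothetical, not part of any real marketing automation API.

```python
import random

def split_ab(recipients, seed=42):
    """Randomly split a recipient list into two equal-sized groups
    for an A/B email send. Illustrative helper only."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = list(recipients)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical list of 1,000 leads
leads = [f"lead{i}@example.com" for i in range(1000)]
group_a, group_b = split_ab(leads)
print(len(group_a), len(group_b))  # two groups of 500
```

The random shuffle matters: splitting alphabetically or by sign-up date can quietly bias one group and skew your results.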
Figure 19-1 illustrates the idea behind an A/B test from online testing vendor Maxymiser.
A/B testing is perfect for testing a small number of variables at a given time: it is easy to grasp, and the data can be read quickly and clearly. The disadvantage of an A/B test is its simplicity; it can’t handle multiple variable changes at a time.
Enter multivariate testing, A/B testing’s big (and more complicated) sister. A multivariate test can compare a much higher number of variables at one time. Generally, multivariate tests can also show more complex information, therefore telling you more about your campaign performance and testing results. Typically, when embarking on a multivariate test, you need a software solution like Monetate or Optimizely. What do you use multivariate testing for? Well, you can test a combination of design changes, CTA locations, copy choices, and more. However, this gets tricky because you have to ensure that your leads see all possible combinations of your asset to properly assign a winner.
This type of test is great for web and landing pages, and you can gain a lot of information on what the lead engages with. However, multivariate testing is not for everyone because you generally need a large database and a lot of traffic for it to be truly effective. Think of all the possible combinations if you have a landing page and are testing copy, CTA location, headline, form length, and so on. The number of page versions can easily run into the dozens, and the combinations of things you could vary into the thousands.
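To see why multivariate tests demand so much traffic, multiply out the variants: every element you test multiplies the number of page versions you have to serve. A quick sketch, using hypothetical element counts:

```python
# Hypothetical multivariate test: number of variants per page element.
variants = {
    "headline": 3,
    "CTA location": 2,
    "copy": 4,
    "form length": 2,
}

# Total page versions = product of the variant counts.
total_versions = 1
for element, count in variants.items():
    total_versions *= count

print(total_versions)  # 3 * 2 * 4 * 2 = 48 distinct page versions
```

Each of those 48 versions needs enough visitors to produce meaningful data, which is why multivariate testing favors high-traffic pages.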
Figure 19-2 shows a multivariate test from HubSpot.
Think of your lead-generation testing as a scientific process. It can be fun and even exciting at times to test what really makes your audience tick. Everyone eventually decides on a testing process that works for their own particular needs, but I want to provide a framework for testing that I have seen work in my experience.
The first step in the process is formulating your question. What are you testing and why are you testing it? An example would be, “What CTA works best in my email campaign?” Or, “Which web page design generates the most clicks and form fill-outs?” This stage can also involve looking at your previous campaigns to determine what has worked in the past. That way, you can create an educated hypothesis.
Before you create your hypothesis, you also want to determine how you will define success and what your success metrics are. According to testing platform Optimizely, for lead generation you want to look at macroconversions: primary conversions, such as form fill-outs, that turn into clicks, conversions, and ultimately leads. You also might want to track microconversions: smaller-scale conversions, in the form of steps that you want a lead to take, such as clicking a button, watching a video, or liking and sharing a blog post.
A hypothesis is an educated guess based on the knowledge that you have obtained while thinking about both your question and your success metrics. Your hypothesis should determine what possible combination of variables might work best to achieve your ultimate success metrics. As stated by Optimizely, “Hypotheses make tests more informative because they provide a specific purpose by helping you hone in on what you are actually trying to determine.” An example of a hypothesis might be that you believe a form with three fields might have a better conversion rate than a form with five, based on your research and previous campaign performances.
Now on to the fun part: actually conducting your test. This is where you investigate your hypothesis so you can prove or disprove your theory. There are many best practices to testing and many elements to think about. The following list gives some easy-to-follow steps to conducting an A/B test:
For the sake of keeping it simple, I have chosen to use an A/B test as an example. Because it is an A/B test, we are going to focus on one variable that we are going to isolate. I'll use an example email A/B test conducted at Marketo using two different From names. The control email’s From address, which Marketo had been using for some time, was Marketo Premium Content. The test email’s From name was the personal email address of a sales rep.
It's tough to prove a hypothesis if your test sample size is small: the larger your sample, the more reliable your test results. But you have to be careful. For our email example, if you use too large a sample, you risk having a large portion of your database receive the less-effective email. You have to strike a balance.
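One way to strike that balance is to estimate, up front, the smallest sample that can detect the lift you care about. The sketch below uses the standard normal-approximation formula for comparing two conversion rates (95 percent confidence, 80 percent power by default); the function name and the example rates are illustrative assumptions, not figures from any real campaign.

```python
import math

def sample_size_per_variant(baseline_rate, min_detectable_lift,
                            z_alpha=1.96, z_beta=0.84):
    """Rough sample size needed per variant to detect a given lift,
    using the normal approximation for two proportions.
    z_alpha=1.96 -> 95% confidence; z_beta=0.84 -> 80% power."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_lift
    p_bar = (p1 + p2) / 2          # pooled (average) conversion rate
    delta = p2 - p1                # the lift we want to detect
    n = ((z_alpha + z_beta) ** 2) * 2 * p_bar * (1 - p_bar) / delta ** 2
    return math.ceil(n)

# Hypothetical: a 4% click rate, hoping to detect a 1-point lift.
needed = sample_size_per_variant(0.04, 0.01)
print(needed)  # several thousand recipients per variant
```

Notice how sensitive the number is to the lift: halving the detectable lift roughly quadruples the required sample, which is exactly why tiny tweaks need big lists.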
Many factors can affect the results of your test, so try to eliminate variables that would render your test invalid. For this example, you not only need to leave your control unchanged, but you also need to send both emails at the same time and make sure your test is randomized. If you are using a marketing automation solution, you can usually conduct A/B testing quickly and efficiently by sending 50 percent of your emails to a random sample of your designated list.
After you have sent out your test, wait and examine your results to prove or disprove your hypothesis. First, you need to determine whether the difference between your two versions is statistically significant, meaning that the results are meaningful and not due to chance. A helpful hint here is to do an online search for A/B testing significance calculator. Figure 19-3 shows an example of a significance calculator from Visual Website Optimizer. You input the number of visitors for both the control and the variation, as well as the number of conversions, and then you can calculate the significance.
You can also use calculators to determine the confidence score, so you know just how significant your test is. A 95-percent or more confidence score is where you want to be to know your test is significant.
Back to the From name test. The personalized From name had 1,000 more opens and 500 more clicks than the control name. Our confidence level in our results was 99 percent. And because we only isolated one factor, it was clear why the email with the personalized From name received more clicks.
Next, of course, you want to optimize your campaigns based off of your test results. This should be fairly simple and straightforward if you are always testing as I suggest. Take your results and implement them!
There are so many things you can test. Need some ideas? Take a look at this handy list courtesy of Dan Siroker, CEO and cofounder of Optimizely, and Pete Koomen, president and cofounder of Optimizely:
3.144.89.238