A/B test gotchas

An important point I want to make is that the results of an A/B test, even when you measure them in a principled manner using p-values, are not gospel. There are many effects that can skew the results of your experiment and cause you to make the wrong decision. Let's go through a few of these gotchas and how to watch out for them.

It sounds very official to say there's a p-value of 1 percent, meaning there's only about a 1 percent chance that a result this large would come from random variation alone, but it's still not the be-all and end-all of measuring an experiment's success. There are many things that can skew or confound your results that you need to be aware of. So even if you see a p-value that looks very encouraging, your experiment could still be lying to you, and you need to understand what can make that happen so you don't make the wrong decisions.
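To make that concrete, here's a minimal sketch of where a p-value like that comes from, using a two-sample t-test from SciPy. The per-user revenue numbers here are entirely made up for illustration, and assume roughly normally distributed data.

```python
# A minimal sketch: computing a p-value for an A/B test with a two-sample t-test.
# The data is simulated purely for illustration, not from any real experiment.
import numpy as np
from scipy import stats

np.random.seed(42)

# Hypothetical per-user revenue for a control group (A) and a treatment group (B).
A = np.random.normal(25.0, 5.0, 10000)   # control
B = np.random.normal(26.0, 5.0, 10000)   # treatment with a small real lift

t_stat, p_value = stats.ttest_ind(A, B)
print(f"t-statistic: {t_stat:.3f}, p-value: {p_value:.5f}")
# A tiny p-value says the observed difference is unlikely to be random
# variation alone -- but "unlikely" is not the same as "impossible."
```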

Remember, correlation does not imply causation.

Even with a well-designed experiment, all you can say is that there is some probability this effect was caused by the change you made.

At the end of the day, there's always going to be a chance that there was no real effect, or that you're measuring the wrong effect entirely. It could still be random chance, or there could be something else going on. It's your duty to make sure the business owners understand that these experimental results need to be interpreted, and that they should be just one piece of their decision.

They can't be the be-all and end-all that they base their decision on because there is room for error in the results and there are things that can skew those results. And if there's some larger business objective to this change, beyond just driving short-term revenue, that needs to be taken into account as well.
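A quick simulation, sometimes called an A/A test, illustrates why random chance alone can fool you. Both groups below are drawn from the exact same distribution, so there is no real effect at all, yet a predictable fraction of experiments still come out "significant." The data and thresholds are illustrative assumptions, not from any real experiment.

```python
# Run many null experiments (identical A and B distributions) and count how
# often they look "significant" at p < 0.05 purely by chance.
import numpy as np
from scipy import stats

np.random.seed(0)

n_experiments = 1000
false_positives = 0
for _ in range(n_experiments):
    # Both groups come from the same distribution -- there is no real effect.
    A = np.random.normal(25.0, 5.0, 1000)
    B = np.random.normal(25.0, 5.0, 1000)
    _, p_value = stats.ttest_ind(A, B)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_experiments} null experiments looked significant at p < 0.05")
# Expect roughly 5% false positives: the significance threshold itself tells
# you how often random variation alone will produce a "winning" result.
```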
