A/B testing is a great way to make data-driven decisions, especially in marketing, digital product design, and user experience. With a well-designed A/B test, you can let the data speak for itself and create more bottom-line value for your business.
The basic setup of an A/B test compares two variants (Version A and Version B) of an ad, a webpage, or an email, for example, using a metric such as click-through rate or conversion rate to determine a winner. When done correctly, A/B testing can lead to better, more informed decisions. However, there are some pitfalls to be aware of that can lead to ill-informed or wrong decisions.
3 Do’s of A/B Testing:
- Do plan your test before you start. A well-run A/B test should be fully planned before testing begins, including documenting what actions will be taken based on the outcome. One of the most important considerations in A/B testing is how long to run the test: too small a sample leads to an underpowered test and inconclusive results. A sample size calculator, such as the one created by Evan Miller, can help you decide from the outset how long to run your test.
- Do define clear metrics that can be measured from the test. You want a single metric that makes the exact definition of “success” clear, such as sales, subscriptions, or clicks. While factors like brand reputation are always worth improving, you cannot measure them precisely by tracking clicks on a website.
- Do randomly assign users. An A/B test is really a controlled experiment, so you want to eliminate any external factors, such as day of the week, holidays, browser, mobile vs. desktop, or anything else that might differ between the Version A group and the Version B group. The best way to do this is to randomly assign each user to one version and make sure that is the only version they see. Run both versions simultaneously so that you’re not comparing data from the past with current data.
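To make the sample-size planning above concrete, the standard two-proportion power formula (the same kind of calculation Evan Miller’s calculator performs) can be sketched in Python using only the standard library. The baseline rate, minimum detectable effect, significance level, and power below are made-up example values:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate users needed per variant to detect a change in
    conversion rate from p1 to p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2                          # average of the two rates
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Example: baseline conversion of 10%, hoping to detect a lift to 12%
n = sample_size_per_variant(0.10, 0.12)
print(n)  # ≈ 3841 users per variant
```

Dividing the required sample size by your expected daily traffic per variant then tells you roughly how many days the test must run.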
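One common way to implement the stable random assignment described above is to hash each user’s ID, so a user always lands in the same variant on every visit. A minimal sketch (the experiment name and user IDs are invented for illustration):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-test") -> str:
    """Deterministically assign a user to 'A' or 'B'.
    Hashing (experiment + user_id) yields a stable ~50/50 split,
    and the same user always sees the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The assignment is stable across calls for the same user:
print(assign_variant("user-42") == assign_variant("user-42"))  # True
```

Salting the hash with the experiment name means a user’s bucket in one test does not determine their bucket in the next, which keeps separate experiments independent.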
3 Don’ts of A/B Testing:
- Don’t make multiple changes in the same test. The idea is to isolate one feature at a time and to make incremental changes. Go with simple experiment designs and iterate.
- Don’t jump to conclusions too quickly. You need to apply proper statistical techniques, which means collecting a properly sized sample so you can tell a real lift from random noise. Similarly, don’t stop an A/B test before the target sample size is reached, as this can invalidate any conclusions.
- Don’t be discouraged if your ideas don’t pan out. Even Microsoft, a pioneer in A/B testing, was wrong most of the time about features it thought customers would want. Similarly, Netflix estimated that 90% of what they tried turned out to be wrong. The key is to fail fast and move on to new ideas.
By keeping these points in mind, you can start letting the data guide you to better decisions and more business value.
Want to learn how to conduct an A/B test?
In our A/B Testing Workshop, you’ll learn the foundations of A/B testing and get hands-on practice planning, scoping, executing, and analyzing statistical tests in your own work environment.