A/B testing has been helping market researchers optimize their tactics for years. Google started using it in February 2000 to determine the number of items to display on its search engine results page. Other corporations, like Amazon and eBay, soon followed by testing potential site changes on users.
Today, A/B testing is commonly used to test two or more alternatives of marketing campaign elements such as website landing pages, promotions, advertising, email content and direct mail campaigns. It’s also been used for software applications and other products or services. The purpose is to establish which version optimizes the desired behavior or conversion rate.
For example, a charity might use it to learn which type of appeal generates the most contributions from its donor base. It could send 1,000 people a letter with a personal story of someone who has benefited from the charity’s activities, and another 1,000 people a letter documenting how the activities of this charity have helped the community.
Each individual would be randomly assigned to one of the two campaigns, and the charity would track the number of people who donate in response to each mailing. The two conversion rates would then be statistically tested to see whether one approach is more effective than the other at generating the desired outcome: persuading individuals to make a donation.
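The comparison described above is typically a two-proportion z-test. The sketch below shows the standard calculation in plain Python; the donation counts are hypothetical numbers chosen only for illustration, not figures from the article.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: 50 of 1,000 donate after the personal-story letter,
# 32 of 1,000 after the community-impact letter
z, p = two_proportion_z_test(50, 1000, 32, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ~ 2.03, p ~ 0.042
```

With these made-up numbers the difference would be significant at the conventional 5% level, so the charity would favor the personal-story appeal.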
The A/B Advantage
A/B testing has the compelling advantage of improving the bottom line by increasing conversion rates. Using primary data from controlled tests to determine the best strategy and tactics for marketing efforts also eliminates the guesswork when developing new marketing campaigns. What works is already known.
Besides being inexpensive to execute, it accurately compares performance between alternatives and measures actual behavior to ensure reliable results.
When is an A/B Test Appropriate?
An A/B test is not always the most effective measurement tool. It is the best fit when the following conditions hold:
- You have a clear goal in mind.
- The test you design will have clear, measurable results.
- The product is finished, other than the component you’re testing.
To eliminate complexity in backend analyses, only one outcome at a time should be measured—number of clicks on a CTA or number of donations generated, for example. An A/B test is not suitable for providing metrics on more holistic concepts like customer satisfaction or loyalty. It’s most powerfully applied to marketing efforts that (except for the parameters being tested) are in their final form.
A/B testing can also be applied to the rewards you offer survey participants: keep every other factor the same except the reward offered at the end, and the results will inform your strategy for future incentivized market research campaigns.
A/B testing does not always translate well to predicting actual market performance and bottom line improvements. This is usually because the A/B methodology has been poorly executed. Many web A/B testing specialists ignore three statistical concepts that impact test reliability and lead to erroneous conclusions. Here are the problems:
- Small sample sizes. You need enough respondents to detect a difference when one actually exists. When the sample is too small, or the test does not run long enough, there are two dangers: a real uplift goes undetected, or an uplift is detected that isn’t really there (a false positive).
- Multiple testing. Running many tests increases the probability of false positives, as does stopping a test as soon as a positive result appears. Some A/B testing software runs many variants simultaneously without correcting for these multiple comparisons, which inflates the false-positive rate.
- Not accounting for regression to the mean. If a variable is extreme on its first measurement, it will tend to be closer to the average on its second measurement, and vice versa. Tests that initially deliver a significant uplift but lose it over time were probably false positives.
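The sample-size concern above can be checked before a test launches with a standard power calculation. The sketch below uses the usual two-proportion approximation; the 3% baseline conversion rate and 20% relative uplift are hypothetical inputs, and `sample_size_per_arm` is an illustrative name, not a function from any particular A/B tool.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base, uplift, alpha=0.05, power=0.8):
    """Approximate respondents needed per variant to detect a relative
    uplift in conversion rate with a two-sided test."""
    p_test = p_base * (1 + uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    # Sum of the binomial variances of the two arms
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_test) ** 2
    return math.ceil(n)

# Hypothetical: 3% baseline rate, hoping to detect a 20% relative uplift
n = sample_size_per_arm(0.03, 0.20)
print(n)  # roughly 14,000 respondents per arm
```

The result illustrates why underpowered tests are so common: detecting a modest uplift on a low baseline rate can require tens of thousands of respondents per variant, far more than many campaigns collect.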
A/B testing offers a useful method of optimizing the elements of marketing strategy, tactics and product development. Like all research, it must be grounded in the principles of the scientific method and statistical theory. Adhering to these rules will elevate the methodology and boost confidence in the results it delivers.
Feature photo: Jacob Lund / Shutterstock.com