The “why” matters for A/B Testing too

A/B testing is one of the most powerful tools for multiplying your impact. It’s also one of the most underutilized, and in many cases misunderstood, tactics. In many respects, success comes down to your “why” when testing, or to put it in scientific terms: you need a hypothesis if you expect to get sustained learning.

Many ministry leaders we talk with are either not testing, or are testing but haven’t been able to drive value from their tests. While the mechanics of testing are simple, there are four pitfalls that most new conversion testers fall into.

1. Not having a hypothesis (forgetting about the why) - Testing is as simple as making a single change and waiting to see how your audience responds. However, if you don’t document why you think the change will drive better results, testing becomes an endless cycle of random changes, and any improvement is the result of luck rather than methodical growth.

2. Making your test too narrow - Testing low-traffic areas of your website, or making only small changes on high-visibility campaigns, is less likely to produce meaningful results. Many people worry that dramatic changes will turn people off; however, if you test elements so subtle they’re intentionally unnoticeable, your users won’t notice or respond.

3. Giving up too soon - The majority of your tests will not yield significant results. The good news is that if you test long enough, you’ll learn that most changes don’t hurt the user experience, so you can be confident making bigger and bolder tests. The downside is that it’s easy to get discouraged when you aren’t seeing results. Remember that continual improvement is a long-term process, and testing is a skill: the more you do it, the better you’ll get.

4. Testing too many variables at once - This can happen in two ways. The first, and most common, is changing more than one element between the “A” version and the “B” version. This is problematic because it becomes impossible to determine which change caused the improvement, which ruins your chances of learning anything. The second way happens when only one element is changed per variation but there are many variations (i.e., multivariate testing). That isn’t always an issue, but if you don’t have enough traffic to get every variant to a statistically sound sample size, you run the risk of the test taking too long, or never reaching a trustworthy result (the sketch after this list gives a rough sense of how much traffic a single comparison needs). As a general rule, when you’re just getting started, choose A/B testing over multivariate testing, and always test only one variable per variant.
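
To make the traffic question concrete, here is a rough back-of-the-envelope sketch in Python using the standard two-proportion sample-size formula. The 3% baseline conversion rate, 20% target lift, and 500 daily visitors are made-up numbers for illustration; swap in your own page’s figures.

```python
from math import sqrt, ceil

def visitors_needed(baseline_rate, relative_lift):
    """Rough visitors-per-variant estimate for a two-sided A/B test
    on conversion rate, at 5% significance and 80% power."""
    z_alpha = 1.96    # two-sided z-score for alpha = 0.05
    z_beta = 0.8416   # z-score for 80% power
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Hypothetical example: a donation page converting 3% of visitors,
# testing for a 20% relative lift, with 500 visitors per day.
n = visitors_needed(baseline_rate=0.03, relative_lift=0.20)
daily_visitors = 500
print(f"{n:,} visitors per variant needed")
print(f"~{ceil(2 * n / daily_visitors)} days to reach that sample size")
```

With those assumed numbers the test needs roughly 14,000 visitors per variant, which is why narrow tests on low-traffic pages, or tests split across many variants, can run for weeks without ever reaching a trustworthy answer.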

Keeping these four pitfalls in mind, it’s time to get testing. If you’ve been testing, let us know what you’re working on and some of the things you’ve learned along the way.

Happy testing!