If you're making all of your website decisions based solely on the results of a single test, you're going to be disappointed.
Most of the conversion rate optimization tests we perform will fail or won't move our conversion rates at all.
Growth is not a project; it’s a relentless grind. You never know for certain if any one of your tests will win, but if you play the game long enough, those wins will start to stack up.
Prepping for the Growth March
Performing numerous A/B tests takes plenty of preparation. Here’s the overview of what you need to succeed:
- Get your data in order. This is absolutely critical.
- Tie test segments through the entire funnel.
- Get your team in place. If your tests involve multiple teams, hitting the tempo and pace you need gets a lot harder, so get everybody onto one dedicated team.
- This team should include a project manager with a background in marketing or product and an interest in acquisition, a designer who loves iteration and results, and a front-end engineer (full-stack if you can spare it).
- Get your levers prioritized. Don’t waste tests on assets that only impact a small number of people.
- Fix any broken steps in your funnel.
- Start at the top of the funnel and work your way down. There’s more data at the top, so you can test faster.
- Work on the assets that drive the most conversions such as your homepage and key landing pages.
- Get a budget for testing.
- Make sure your testing priorities are in order; don't waste tests on changes that only a handful of visitors will ever see.
- Get away from “growth hacks” that don’t move the needle the vast majority of the time.
You need a stream of insights from your customers to find out where the real value is. You can get these insights from interviews with recent customers, surveys from people who abandoned the funnel, Qualaroo popups at funnel bottlenecks, and heatmaps.
7 Rules for A/B Testing
False positives will destroy your gains. You can accidentally tank your conversion rate and lose six months of growth. When I test, I’m not looking for winners. What I’m far more concerned about is avoiding losers, especially ones that look like winners.
To optimize your testing, cycle through as many tests as possible, as fast as possible, while only launching confirmed winners. These are the 7 A/B testing rules I live by:
- Above all else, the control stands. Unless you have a proven variant that’s better than the control, the control stays.
- Get 2,000+ people through the test within 30 days. Around 200 conversions should also give you enough data, so if you have lower traffic but a higher conversion rate, your tests are still valid.
- Wait at least one week before checking your data. The first few days are highly volatile and your data will bounce all over the place.
- Only launch variants at 99% statistical significance. Not 95%. The standard 95% threshold assumes you calculated the required sample size ahead of time; raising the bar to 99% greatly reduces the number of false positives.
- If the test drops below 10% improvement, ditch it. The opportunity cost of not running other tests is too high; you don't want to miss out on a 30-40% win because you wasted time chasing a 10% bump.
- If you don’t have a new winner after a month, ditch the variant. As soon as you know it’s not a big win, move on.
- Get the next test ready while you wait. No downtime.
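The "only launch at 99% significance" and "ditch below 10% improvement" rules above can be sketched as a simple two-proportion z-test. This is a minimal illustration, not code from the talk: the function names, the traffic numbers, and the wired-in 10% lift cutoff are all assumptions.

```python
# Sketch of a launch gate: ship the variant only if it beats the control
# at 99% significance AND shows at least a 10% relative lift.
from math import sqrt, erf

def z_test_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0  # identical rates: no evidence of a difference
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def should_launch(conv_a, n_a, conv_b, n_b, alpha=0.01):
    """Launch only a confirmed winner: 99% significance and >= 10% lift."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    lift = p_b / p_a - 1
    return bool(p_b > p_a and lift >= 0.10
                and z_test_pvalue(conv_a, n_a, conv_b, n_b) < alpha)

# Hypothetical numbers: control converts 100/1000, variant 160/1000.
print(should_launch(100, 1000, 160, 1000))  # clear winner
print(should_launch(100, 1000, 105, 1000))  # only a 5% lift: control stands
```

Note how the control wins every tie: unless the variant clears both bars, the function says no.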
Do this for 9 to 12 months, and you’ll double or triple your conversion rates. Easily.
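As a back-of-the-envelope check on that claim, winning tests compound multiplicatively. The win count and lift sizes below are illustrative assumptions, not figures from the talk:

```python
# How a string of modest wins compounds into a doubled conversion rate.
def compounded_rate(base_rate, lifts):
    """Apply a sequence of relative lifts to a starting conversion rate."""
    rate = base_rate
    for lift in lifts:
        rate *= 1 + lift
    return rate

# Hypothetical: eight winning tests in a year, each a 10-15% relative lift.
wins = [0.10, 0.15, 0.12, 0.10, 0.15, 0.10, 0.12, 0.10]
print(round(compounded_rate(2.0, wins), 2))  # a 2% rate grows to roughly 4.9%
```

Eight mid-sized wins multiply out to about a 2.4x improvement, which is why the grind beats any single "growth hack."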
This post is a recap from Lars's presentation at our 2015 Go Inbound Marketing Conference. Check out his entire presentation on the Element Three YouTube channel.
Feed your marketing mind and keep your skills sharp by opting into our weekly newsletter, packed with lessons we’ve learned firsthand. You won’t regret it.