One of the most important factors to consider when designing a website is user experience. UX is usually approached with pleasing aesthetics in mind, but it is not typically thought of as data-driven. This is where A/B testing, or split testing, comes into play.
This is a process that allows you to introduce alternative test versions of a conversion tool, landing page or product page to members of your target demographic. Then, you can determine which performs better. While mistakes are perfectly acceptable, the most dangerous ones are those you don’t realize you’re making.
The following article will identify and describe three common problem areas where the margin for error is high, and provide actionable solutions. For starters, we’ll look at something we’ve already hinted at.
1. Focusing on Design Instead of Solving Conversion Problems
Your website’s ultimate purpose is to convert its visitors, and to do this you need to become familiar with what makes your audience follow through with their purchase intent, and at which point they abandon the checkout process.
Designers are all too aware of the fact that their websites only have 5 seconds to capture a visitor’s attention and keep it. This leads them to believe visual design is of paramount importance and that copy plays a less significant role in conversion. Not true.
When Alhan Keser of Widerfunnel.com first began conducting A/B testing, he assumed as much, and the results of his design/usability-related tests were underwhelming. The problem with presenting two entirely different designs is that your test subjects respond to each immediately, and roughly half navigate away from the page; it is effectively a coin toss.
This does not address the issue at hand – namely that sales are caused by reassurance. You’ll want to look at how bold headlines affect consumer motivation and how your tests – yes, multiple tests – can be carried out across other marketing channels.
Create A/B tests that display different calls to action and keep track of your analytics. Make sure you understand the basics of color theory in conversion so you can gain a much clearer picture of what’s going on and target areas specifically related to the checkout process.
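To make a call-to-action test trustworthy, each returning visitor should always see the same variant. As a minimal sketch (the function name, user IDs, and CTA labels here are hypothetical, not from any particular tool), a deterministic hash-based split accomplishes this without storing any state:

```python
import hashlib

def assign_variant(user_id: str, variants=("Buy Now", "Start Free Trial")) -> str:
    """Deterministically bucket a visitor so they always see the same CTA."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same visitor always lands in the same bucket, so repeat visits
# don't contaminate the test groups.
print(assign_variant("user-123") == assign_variant("user-123"))  # True
```

Most A/B testing platforms handle this bucketing for you; the point is that assignment must be stable per visitor, or your analytics will mix exposures across variants.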
2. Not Distinguishing Between Mobile and Desktop Users
Almost everybody with access to the Internet will be familiar with the frustration of navigating websites that are not optimized for mobile devices. In many instances, the user will decide that even if they really want what your site is selling, it simply won’t be worth the hassle.
When your efforts in A/B testing pull from test groups that are exposed to content on different devices, your end results will be skewed. As a recent study by Pew Internet Research found, 34 percent of U.S. Internet users shop online mostly using their mobile phones. If you cannot control this and target specific platforms, you will run into trouble.
The mistake is in believing that variations in screen size do not impact the results of your tests. Your initial data might look promising and prompt you to implement changes earlier than you otherwise would have, and this further compounds the distortion.
When sample sizes are restricted, you might be inclined to think your data is more accurate, and sometimes it certainly seems that way. However, even if your results come back at a 95 percent confidence level that the variation beats the original, they may not be replicable with a larger or different audience.
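The "95 percent" figure typically comes from a two-proportion significance test on the conversion rates of the control and the variation. A minimal sketch of that calculation, using only the standard library (the example counts are made up for illustration):

```python
import math

def confidence_b_beats_a(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """One-sided two-proportion z-test: confidence that variation B
    truly outperforms control A, given conversions and sample sizes."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))     # normal CDF of z

# Hypothetical test: A converts 200/1000, B converts 250/1000.
conf = confidence_b_beats_a(200, 1000, 250, 1000)
print(f"confidence B beats A: {conf:.3f}")
```

Run the same numbers at a tenth of the sample size (20/100 vs. 25/100) and the confidence drops well below the 95 percent threshold, which is exactly why small samples produce "wins" that fail to replicate.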
Take steps to ensure the users you target are actually exposed to the test subject or subjects in question before you introduce any variables and track their behavior.
Test your responsive designs by restricting the audience to mobile-only users, or establish a separate mobile-optimized website and conduct new tests accordingly. Just be aware of the situations that can arise when your data is gained from broad sources and do your best to cater to specific browsing devices and consumer groups.
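One simple way to restrict an audience to a single device class is to bucket visitors by user-agent string before assigning them to a test. This is a crude sketch (the token list is an assumption, not an exhaustive or official detection method; real projects usually rely on a maintained device-detection library):

```python
# Tokens commonly found in mobile user-agent strings (illustrative, not exhaustive).
MOBILE_TOKENS = ("Mobi", "Android", "iPhone", "iPad")

def device_bucket(user_agent: str) -> str:
    """Crude user-agent check to keep mobile and desktop test groups separate."""
    return "mobile" if any(t in user_agent for t in MOBILE_TOKENS) else "desktop"

ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) Mobile/15E148"
print(device_bucket(ua))  # mobile
```

Only visitors in the intended bucket would then enter the experiment, so mobile and desktop behavior never blend into one data set.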
3. Testing Micro-Conversions and Expecting Big Wins
Designers often assume that changing one small thing is all it takes to produce dramatic improvements in conversion rates. The idea is appealing, but it simply isn't a sound basis for conducting your tests.
While micro-conversions are important, they do not take into account the bigger picture and thinking otherwise may lead you in a direction that ultimately doesn’t yield the results you expect in practice.
In the event this happens, you will have wasted hours of your time and will be left with unusable data. It may then take you some time to revise your testing strategies and come up with something that will work.
If small changes provided significant and sustainable gains over the long term, there would be no real need for anyone to conduct any A/B testing.
Focus on macro-conversions first, and test small changes only after earlier tests support your bigger ones; this sequencing helps maximize positive outcomes. In other words, your testing program should involve more than changing the color of a button.
The Practical Aspects of Conducting A/B Testing
On one final note, remember to double- and triple-check that your outcomes are accurate before you make any changes to your designs or content.
Hopefully you will have gained a few new insights into how you can carry out your A/B testing campaigns more effectively. As long as you learn from your past mistakes and avoid repeating the same ones, your website will get to where it needs to be.
Want to start testing your site today? Right now, JotForm has partnered with Unbounce to help you A/B test your landing pages and web forms. They are offering 50% off of Unbounce for the first three months after your free trial. Click here to read more about the partnership and get the promo code.