A/B split testing is no longer an enigmatic term among web professionals; countless articles and books cover the basics. What's more, access to tools such as Visual Website Optimizer (disclaimer: this is my startup), which simplify the setup and maintenance of A/B tests, has made the testing process itself as straightforward as possible. Despite this, A/B split testing still isn't part and parcel of the daily work of UX designers and internet marketers. The question then becomes: why not?
If practitioners have a firm grasp of the concepts behind A/B testing as well as tools to aid them in the process, the only things deterring would-be testers are the questions of what to test, and why. In this article, we'll look at how to identify the elements of a website most likely to influence its users, and therefore most worth testing.
Fix your primary test objective
Most websites serve multiple goals. Take, for example, UX Booth. Three hypothetical objectives of this blog are: a) getting more subscribers; b) increasing visitor engagement (measured in terms of how many visitors comment/participate in discussion); and c) increasing clickthroughs on advertisements.
In an A/B test, it is important to optimize for only one goal at a time. Though you can always measure performance against multiple goals, your variations should be designed with a single primary objective in mind. Other goals should be monitored only to be sure that your tradeoffs are balanced. For example, if this blog optimizes for subscribers by testing a larger RSS icon, it may turn out that the winning variation actually decreases total revenue from advertisements. Hence, while testing, keep an eye on all website goals. Consider integrating your test with web analytics software to measure the impact of variations across the site.
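To make the idea concrete, here is a minimal sketch of how such tracking might look. The variation names, goal names, and the `sendToAnalytics` helper are all hypothetical; a real setup would forward these events to whatever analytics tool you already use.

```typescript
// Hypothetical goal-tracking sketch: one primary goal decides the test;
// secondary goals are recorded only to watch for negative tradeoffs.

type Goal = "rss_subscription" | "comment_posted" | "ad_click";

const PRIMARY_GOAL: Goal = "rss_subscription";

interface ConversionEvent {
  variation: string; // e.g. "original" or "large_rss_icon"
  goal: Goal;
  isPrimary: boolean;
}

// Stand-in for a real analytics integration (your testing tool's API,
// Google Analytics events, etc.); here we simply log the event.
function sendToAnalytics(event: ConversionEvent): void {
  const role = event.isPrimary ? "primary" : "secondary";
  console.log(`[${event.variation}] ${role} goal reached: ${event.goal}`);
}

function recordGoal(variation: string, goal: Goal): void {
  // Secondary goals never decide the winner; they only reveal tradeoffs,
  // e.g. a bigger RSS icon that quietly suppresses ad clicks.
  sendToAnalytics({ variation, goal, isPrimary: goal === PRIMARY_GOAL });
}

// Usage: a visitor in the "large_rss_icon" variation subscribes...
recordGoal("large_rss_icon", "rss_subscription");
// ...and later the same variation is checked against ad revenue.
recordGoal("large_rss_icon", "ad_click");
```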
Decide what to test
Once you have determined your primary test goal, the next step is to think about which elements of the website might influence performance toward that goal. This isn't straightforward; I consider it the most important (and perhaps hardest) part of doing an A/B test. Theoretically, every element on the page influences visitors' decisions to complete website goals, which means that, ideally, every major element of the website should be tested. That strategy is obviously impractical, so you need to prioritize which elements to test.
Here are some ideas:
Usability testing
A good strategy for generating ideas is to conduct a usability test specifically with a forthcoming A/B test in mind. Instruct participants to complete your primary website goal, and at the end of the usability test, ask them what influenced their decisions. Their responses will point you to the elements that matter. Recently, I ran a usability test using FeedbackArmy in which I asked participants whether they would like to sign up for my product. A clear pattern emerged in the responses: there was too much text on the homepage, and the text was a bit too technical.
Feedback from friends and colleagues
As website owners and designers, we know our work intimately and therefore have lots of blind spots when it comes to evaluation. Honest, unbiased feedback from friends and colleagues really shines here. Take note of comments such as "The download button was hard to notice" or "Is it a free service?" A great thing about feedback from friends is that you can follow up with more questions and even ask for suggested improvements. An ideal candidate is someone who is a friend of yours in a non-business setting and who is only vaguely aware of what your website is about. This is also known as a hallway test.
Web analytics data
Mine your analytics to determine what prevents visitors from completing the goal. For example, if increasing signups is your goal, use your web analytics tool to determine whether visitors aren't reaching the signup page at all or are simply bouncing off it after arriving. In the former case, you need to test the signup button or link on the website; in the latter, you need to test the signup form on the signup page itself. Similarly, if your objective is to increase sales during checkout, you may be tempted to test the size and color of the Buy Now button. Before jumping to conclusions, though, see which pages your visitors browse to right after the checkout page. It may be that many visitors visit your shipping policy page instead of completing the checkout process because it wasn't clear to them what you charge for shipping. That insight changes your testing priority: try a design where the shipping policy is clearly laid out before the visitor enters the checkout process.
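As an illustration, here is a rough sketch of the drop-off analysis described above. The page names and visitor counts are invented; in practice these numbers would come straight out of your web analytics tool.

```typescript
// Hypothetical funnel analysis: where do visitors leak out before checkout?
// The counts below are invented; pull real ones from your analytics tool.

interface FunnelStep {
  page: string;
  visitors: number;
}

const checkoutFunnel: FunnelStep[] = [
  { page: "/product", visitors: 10000 },
  { page: "/cart", visitors: 3200 },
  { page: "/checkout", visitors: 1900 },
  { page: "/order-confirmed", visitors: 600 },
];

// Report the percentage of visitors lost at each transition; the biggest
// drop marks the page (and elements) most worth A/B testing first.
for (let i = 1; i < checkoutFunnel.length; i++) {
  const prev = checkoutFunnel[i - 1];
  const curr = checkoutFunnel[i];
  const dropOff = 100 * (1 - curr.visitors / prev.visitors);
  console.log(`${prev.page} -> ${curr.page}: ${dropOff.toFixed(1)}% drop-off`);
}
```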
Heatmaps/clickmaps
Heatmaps provide a visual representation of where visitors click on a page. For example, a heatmap of your homepage may reveal that visitors aren't scrolling below the fold and therefore never notice your signup button. A good test in that case would be a version where the signup button sits above the fold. Crazyegg and Clicktale are good tools for this; if you're looking for a comparison, be sure to check out UX Booth's roundup of these tools.
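For the curious, the raw data behind a heatmap is simpler than it looks: essentially a log of click coordinates aggregated over many visitors. The "/collect-clicks" endpoint below is a stand-in, and tools like Crazyegg and Clicktale handle all of this (plus the visualization) for you; this sketch only shows the underlying idea.

```typescript
// Minimal sketch of heatmap data collection: record where visitors click.
// "/collect-clicks" is a hypothetical endpoint; real tools such as
// Crazyegg or Clicktale do this, and the rendering, for you.

interface ClickPoint {
  x: number; // horizontal position on the page
  y: number; // vertical position, including scroll offset
  page: string;
}

document.addEventListener("click", (event: MouseEvent) => {
  const point: ClickPoint = {
    x: event.pageX,
    y: event.pageY,
    page: window.location.pathname,
  };
  // Fire-and-forget; aggregated server-side, these points become a heatmap
  // showing, for instance, whether anyone ever clicks below the fold.
  navigator.sendBeacon("/collect-clicks", JSON.stringify(point));
});
```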
Create test variations
The third and final step is to create variations for the element(s) you selected in the previous step. Now is the time to get creative, but there is a tradeoff: more variations means more time required to run the test, because each variation needs enough visitors to yield a statistically meaningful result. Selecting a few variations that stand a good chance of beating the original is therefore important. Coming up with ideas for test variations shouldn't be a random process, either.
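To see why extra variations stretch a test out, here is a back-of-the-envelope sample-size sketch using the standard two-proportion formula at 95% confidence and 80% power. The baseline conversion rate and expected uplift below are illustrative assumptions, not numbers from any real test.

```typescript
// Rough sample-size estimate per variation for a conversion-rate test,
// using the standard two-proportion formula (95% confidence, 80% power).
// The baseline rate and expected uplift are illustrative assumptions.

function visitorsPerVariation(baseline: number, uplift: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const p1 = baseline;
  const p2 = baseline * (1 + uplift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// Example: 5% baseline conversion, hoping to detect a 20% relative uplift.
const perVariation = visitorsPerVariation(0.05, 0.2);

// Total traffic grows linearly with the number of variations tested,
// which is why a focused test with a few strong variations finishes sooner.
for (const variations of [2, 4, 8]) {
  console.log(`${variations} variations: ~${perVariation * variations} visitors total`);
}
```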
Poring over existing A/B split testing case studies can be a fantastic source of ideas. Although they aren't specific to your website, there is a good chance they'll kickstart your testing. Search the internet or use A/B Ideafox to browse test case studies specific to your industry and your test objective. Try not to copy exact variations; rather, look at the general patterns tested in each case study. For example, you may notice that converting a download link into a button increased sales in one case, while in another study, moving the location of the promotional message on the homepage decreased the bounce rate.
The key message here is that usability testing, web analytics data, and case studies help practitioners get the most out of an A/B test. With proper preparation, your A/B test need not be a shot in the dark. Instead, you can increase your chances of success by laying a solid foundation: selecting the right elements to test and then coming up with strong variations.
Good luck with your next A/B test!
Want a beta invite?
Visual Website Optimizer is currently in beta, but if you want to use it, sign up for a free account using the invite code “uxbooth” (without quotes). This is an exclusive offer for UX Booth readers.
Further Reading
- How to Increase Site Performance Through A/B Split Testing
- ABTests.com – a repository of case studies
- Anne Holland’s WhichTestWon – a weekly-updated blog on A/B test results
- MarketingExperiments.com – subscription-based but has a lot of free articles
- A/B Ideafox – a search engine for A/B split and multivariate test case studies
About the Author
Paras is the founder of Wingify, a startup in the web analytics and optimization space. Its first product, Visual Website Optimizer, is the world’s easiest-to-use A/B, split, and multivariate testing tool. His aim with the product is to take the fear out of A/B split testing and bring the methodology to the Fortune 5 million businesses. He regularly posts detailed articles, tips, and tricks on his conversion optimization blog and the I Love Split Testing blog. You can follow him on Twitter @wingify and @paraschopra.