Everybody in the online space has heard the phrase “A/B split-testing”. But many marketers and retailers are hesitant when it comes to conducting their own tests.
They’re unsure about how to coordinate all the different parts of the split-testing process, from brainstorming to software selection to the analysis of results.
And while A/B split testing does take some effort, it’s far from difficult. Spending some time to implement a well-structured, proven A/B split-testing process will be hugely positive for your online store.
In this guide, you’ll be given a simple formula for conducting split-tests. You’ll also learn about common mistakes, see some real-life case studies, and receive practical, specific tips about which on-site and off-site elements to test.
What is A/B Testing?
How to Do A/B Testing: an Overview
1. Analysis
2. Recommendations
3. Prototype and Design
4. Code and Test
5. Results
How to Calculate Your A/B Split-Testing Sample Size
Which Product Page Elements Should You Split-Test?
Top 11 Ecommerce A/B Split-Testing Mistakes to Avoid
A Review of the Best A/B Split-Testing Tools for Ecommerce
Examples of Ecommerce A/B Testing Case Studies
1. Budapester
2. Reserved
3. 4F
Conclusion
Let’s start!
What is A/B Testing?

A/B testing involves driving traffic to two different pieces of content – like ads, emails, web pages, and so on – to see which one performs better. Often, the only difference between the test subjects is a single element, such as a headline, CTA (Call to Action), image, or piece of copy. Alternatively, a split-test can compare two completely different assets, such as two Facebook ads, two marketing emails, or even two entire sales funnels.
So what do variants “A” and “B” in A/B testing usually represent? Whenever you run a test, you need a set of baseline or “control” results. “A” typically represents your current version (or the first iteration of whatever you’re testing), while “B” is the variation you compare against it.
A generic example of an A/B test.
Let’s say, for example, that a product page receives several hundred visitors a day. You decide to run a split-test in which you add a notification about next-day delivery next to the “Add to Cart” button. Using your A/B testing software, you create an identical page with that single change, split traffic equally between the two pages, and measure the results. The current page is test subject “A”; the variant is test subject “B”.
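Dedicated testing tools handle the traffic split for you, but the underlying mechanism is simple. Here’s a minimal Python sketch of deterministic 50/50 bucketing – the experiment name and visitor ID are hypothetical:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "delivery-note-test") -> str:
    """Deterministically assign a visitor to variant A or B (a 50/50 split).

    Hashing the visitor ID means a returning visitor always sees
    the same variant, which keeps the test results clean.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("visitor-42"))  # same visitor, same variant, every time
```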
Alternatively, you may be gearing up for a promotional email campaign in which you direct your subscribers to a landing page to enter a free giveaway. You have built two landing pages – “A” and “B” – but you want to see which one attracts more entrants. Again, using your split-testing software, you drive half of the email traffic to page “A” and half to page “B”. Even though you don’t yet have any results, “A” is the control page, and “B” is the challenger.
“Multivariate testing” works on the same principle but involves testing variants that include multiple changes. The aim is to determine which combination of variables performs best. In an A/B split-test, for example, you may test a green CTA button against a red one. In a multivariate test, you might change both the color and the CTA text at the same time. In a test with two on-page changes, this would create four variants:

- Green button, original CTA text
- Green button, new CTA text
- Red button, original CTA text
- Red button, new CTA text
The benefit of multivariate testing is that it removes the need to run lots of split-tests one after the other. The downside is that it requires a lot of traffic.
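To see why, here’s a quick sketch that enumerates the combinations and the share of traffic each one receives (the button colors and CTA texts are illustrative):

```python
from itertools import product

button_colors = ["green", "red"]
cta_texts = ["Add to Cart", "Buy Now"]

# Every combination of the two elements becomes its own variant
variants = list(product(button_colors, cta_texts))
for color, text in variants:
    print(f"{color} button / '{text}'")

# Each variant receives only 1/len(variants) of your traffic, so this
# 2 x 2 test needs roughly twice the visitors of a simple A/B test
# to reach the same per-variant sample size.
print(f"Traffic share per variant: {1 / len(variants):.0%}")
```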
How to Do A/B Testing: an Overview

Let’s look at a basic formula for conducting A/B split-tests. Don’t worry too much about the technical aspect of A/B testing at this stage. There is a range of tools available to streamline and automate everything from page creation to the interpretation of results, and we’ll outline the best apps and solutions in a moment.
A/B split testing is usually either on-site or off-site. On-site tests cover things like product pages, landing pages, checkout forms, and so on. On-site tests might also be conducted on the pages of an app – if, for example, you have a mobile shopping or loyalty program app. Basically, “on-site testing” is for any page on your site that has a singular goal and corresponding primary CTA.
Off-site tests are for variants of ads (especially paid advertisements), emails, social media posts, push notifications, and so on.
It’s essential to run tests at the same time with the same traffic sample. Traffic and time period constitute the two biggest variables that can skew results. There’s absolutely no benefit, for example, in comparing the results of two variants if one was tested on Halloween and the other on Mother’s Day.
Use the following process to structure your own A/B tests:
1. Analysis

In this stage, you determine your goals and prioritize which page elements to split-test.
Goals will center on boosting your key conversion metrics, and “conversions” might be clicks, sign-ups, or sales. You might even opt for “broader” success metrics like engagement or reach, especially when testing ads. Whatever the case, you need a clear metric by which to measure the relative success or failure of variants.
Once you’ve set goals, you can research and prioritize which tests to run. You should research those page templates (product pages, category pages, checkout forms, etc.) that are most important for your goals and then determine which have the greatest potential for improvement. Look at pages with high bounce rates, unusually low conversions, low engagement, high abandonment rate and so on.
Once you’ve identified pages that are both important and have room for improvement, rank them according to how easy they are to test. It’s better, especially when implementing a new strategy, to go for the lowest-hanging fruit, moving on to more complex tests as you acquire more data. This approach will deliver the greatest returns over the shortest period of time.
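One common way to formalize this ranking is the PIE framework (Potential, Importance, Ease), where each candidate page is scored on all three criteria. The pages and scores below are hypothetical – a minimal sketch:

```python
# Hypothetical candidate pages, each scored 1-10 on the PIE criteria
candidates = [
    {"page": "product page",  "potential": 8, "importance": 9, "ease": 7},
    {"page": "checkout form", "potential": 9, "importance": 8, "ease": 4},
    {"page": "category page", "potential": 5, "importance": 6, "ease": 8},
]

def pie_score(candidate):
    """Average the Potential, Importance, and Ease scores."""
    return (candidate["potential"] + candidate["importance"] + candidate["ease"]) / 3

# Test the highest-scoring (most valuable, easiest) pages first
for c in sorted(candidates, key=pie_score, reverse=True):
    print(f"{c['page']}: {pie_score(c):.1f}")
```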
Your chosen testing element might be a call-to-action, a headline in ad copy, an image on a landing page, a subject line in an email, or a social media post advertising a discount. The critical thing to remember is that test subjects should usually differ by a single element, with everything else remaining the same. The exception to this rule is when you are testing two entirely separate variations, such as landing pages or sales funnels made up of unique emails and pages.
2. Recommendations

After you have identified which tests you want to run, you need to brainstorm variations and form hypotheses.
Ask the question: “Which changes might lead to better outcomes, and why?”
A hypothesis is an evaluation of why a page or element isn’t performing as well as it might and how you might improve it. When you run an A/B test, you are essentially testing a hypothesis.
You might conclude, for example, that your current product page CTAs don’t stand out enough and that visitors have trouble finding them. The way to solve this problem would be to use a brighter color for the CTA button.
The best way to formulate hypotheses is to use the following simple template: If…, then…, because….
Let’s look at an example:
If information about low stock is added to product pages next to the CTA, then the add-to-cart rate (and thus the conversion rate) will increase, because urgency-building elements prompt visitors to take action.
3. Prototype and Design

After forming hypotheses, many people jump in and start organizing the test. But it’s essential to properly brainstorm and verify different design options, ensuring that the whole team is on board and that all ideas are accounted for.
You should begin by creating loose wireframes of proposed changes, brainstorming as many possibilities as is feasible. After verifying those that seem most promising, you can create full prototypes for implementation purposes.
4. Code and Test

Begin by calculating your sample size. Your “sample size” is the amount of traffic you need to conclusively say that any differences in results are not due to chance. We cover this topic in depth in the next section. If you are not currently driving high levels of traffic to a specific page, or your site is in the development stage, you can always buy traffic. Many services exist for this purpose.
Then, with the groundwork in place, you can select the right tools and begin the test. Different tools serve different testing needs. For individual page elements, a simple web editor is all that’s needed. For more complex split-tests, such as a comparison of different sales funnels, sophisticated tools may be required. Dedicated software is also available for email marketing and ad campaigns.
If you have a dedicated development team for implementing on-site code, the designs you created in the previous step will prove invaluable here.
5. Results

Once the test has run its course, you can evaluate the results and formulate new split-tests. Evaluation has two purposes: to determine a winner and to generate ideas for future tests. Sometimes results will be inconclusive, causing you to revise or abandon your original hypotheses. In other cases, results will be significant enough to prompt similar tests on related pages, or even more advanced variations of your original change.
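How do you know whether “B” actually won? Your testing tool will report this for you, but the underlying check is typically a two-proportion z-test. Here’s a minimal sketch with made-up conversion counts:

```python
from math import sqrt
from statistics import NormalDist

def z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided two-proportion z-test: is B's lift over A real or chance?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_error = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / std_error
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: A converts at 5.0%, B at roughly 5.8%
z, p = z_test(480, 9600, 560, 9600)
print(f"z = {z:.2f}, p-value = {p:.4f}")  # p < 0.05: the lift is unlikely to be chance
```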
Split-testing is best conducted as part of a longer-term strategy. You should aim to make lots of small changes over many weeks and months. All of these changes will add up to dramatically and consistently improve your overall conversion rate.
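The payoff of this long-term approach compounds, because small lifts multiply rather than simply add up. A quick back-of-the-envelope illustration:

```python
# Ten separate tests over several months, each delivering a modest 2% lift
lifts = [0.02] * 10

overall = 1.0
for lift in lifts:
    overall *= 1 + lift  # each win multiplies the previous gains

print(f"Total conversion rate improvement: {overall - 1:.1%}")  # ~21.9%
```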
How to Calculate Your A/B Split-Testing Sample Size

Calculating your minimum sample size is relatively easy once you understand the underlying concepts.
Here are a few terms you’ll need to know:

- Baseline conversion rate – the current conversion rate of the page you’re testing.
- Minimum detectable effect (MDE) – the smallest improvement you want the test to be able to detect reliably.
- Statistical significance – the level of confidence that the measured difference isn’t due to random chance, conventionally set at 95%.
- Statistical power – the probability of detecting a real effect if one exists, conventionally set at 80%.
Use this calculator from Evan Miller to work out your minimum sample size, and read this post on statistical significance if you want to learn more.
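If you’d like to see the arithmetic such calculators perform, here’s a minimal Python sketch of the standard two-proportion sample-size formula. The baseline and lift figures are illustrative:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-proportion test."""
    p1 = baseline
    p2 = baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 95% significance (two-sided)
    z_power = NormalDist().inv_cdf(power)          # 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / mde ** 2)

# A 5% baseline conversion rate, aiming to detect a 1-point lift to 6%
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,000 visitors per variant
```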
Which Product Page Elements Should You Split-Test?

Product pages fit the criteria for picking testing candidates perfectly. They are among the most important and highest-traffic pages on an ecommerce site. They’re also easy to split-test.
Here are some of the product page elements that can have the greatest effect on conversions:

- Calls to action (color, copy, and placement)
- Product descriptions and images
- Price and discount presentation
- Delivery information and other USPs
- Urgency-building elements, such as low-stock notifications
If you would like to see some examples of top-performing product pages, along with ideas about how to boost conversions, check out our post on the topic.
If you’re looking for other testing inspiration, we’ve written the most comprehensive ecommerce optimization checklist available on the web (or anywhere, for that matter). Download it for free now!
Top 11 Ecommerce A/B Split-Testing Mistakes to Avoid

When not done correctly, split-testing can be a colossal waste of time and money.
Avoid making the following mistakes:
A Review of the Best A/B Split-Testing Tools for Ecommerce

A/B split-testing should encompass most aspects of your marketing and sales activities. It shouldn’t be limited to your site. Most dedicated apps, such as those for your email marketing, Facebook advertising, social media, and so on, will come with their own A/B split-testing tools.
This section outlines the best tools for running split-tests on your ecommerce site. That said, there isn’t an all-out “best” tool when it comes to A/B split testing. Different solutions are designed for different types of online stores, and the best choice of software depends on a range of factors, including size, industry, preferred marketing methods, and more.
Here’s our rundown of the top five ecommerce A/B split-testing tools:
Examples of Ecommerce A/B Testing Case Studies

So what does an A/B split-test look like in practice?
Here are three examples from Growcode’s own case files:
1. Budapester

Budapester is a large online retailer that sells designer bags, shoes, and accessories. The company wanted to implement a long-term testing plan that was cost-effective. Analysis showed that product pages and the shopping cart had the greatest potential for improvement.
Result: Conversion rate increased by 12.5%.

Read the full case study here.
The following hypotheses were formulated and tested:
Hypothesis one: Clearer communication of the USP on all pages would boost conversions.
Before: The USP, which includes free shipping and immediate product availability, was not shown on product pages.
After: The USP was included below the product description and in the header.
Hypothesis two: The header was taking up too much space and distracting visitors with unnecessary links and information.
Before: The header was unclear, with lots of small buttons, hard-to-read text, and unnecessary links.
After: The header was simplified and the main buttons were made clearer.
Hypothesis three: A streamlined shopping cart would reduce cart abandonment.
Before: On the purchase confirmation page, information about free delivery was not shown and discounted prices were not highlighted.
After: Free delivery, availability, and discounts were all included in bright colors to make them noticeable.
2. Reserved

Reserved is the biggest fashion retailer in the CEE region. The online store was launched in 2013.
Result: 4.6% conversion rate increase.
Read the full case study here.
The following hypotheses were formulated and tested:
Hypothesis one: Adding the USP to main pages – home page, product pages, and category pages – would help to persuade visitors of the unique benefits of shopping with Reserved.
Before: No clear USPs were displayed on the home page.
After: The USPs were displayed on the homepage just underneath the header.
Hypothesis two: Including the USP on shopping cart pages would reduce cart abandonment.
Before: Certain USPs were shown, but they were not clearly explained. Information about free delivery and free courier delivery on purchases over $50 was not shown.
After: A section displaying information about USPs was included on the right of the page.
3. 4F

4F sells sportswear and sports accessories. The company has built a reputation for quality – mixing traditional manufacturing processes with modern designs.
Result: 8% global conversion rate increase.
Read the full case study here.
The following hypotheses were formulated and tested:
Hypothesis one: Including detailed descriptions on product pages will alleviate doubt and prompt more visitors to add products to the cart.
Before: Product information was scattered, difficult to scan, and far away from the CTA.
After: Product details, including delivery information, were written to be scannable and placed next to the CTA.
Hypothesis two: Showing discounts as a percentage will prompt more customers to add products to the cart.
Before: The original price was struck through and shown next to the discounted price, with no further information.
After: A figure showing the discount as a percentage was added next to the current price.
Hypothesis three: Showing information about in-store delivery would boost conversions because it is highly relevant to customers and 4F has a well-known chain of local stores.
Before: Information about store delivery was quite far down the page.
After: Direct shipping and in-store delivery details were displayed next to each other above the CTA.
As you can see, most of the page elements that were tested are fairly generic A/B testing candidates. Although they might seem “safe”, they can still drive significant boosts in conversion rate.
Conclusion

Armed with the information outlined in this post, you can start to conduct tests that drive real results and move you closer to your conversion and revenue goals.
But there’s an important point to keep in mind.
Don’t forget the importance of continual and consistent split-testing.
Implementing an optimization campaign that involves making lots of small changes over time will place you well above your competition. It’s the strategy that big players like Amazon use to achieve conversion rates well above the industry average.