Everybody in the online space has heard the phrase “A/B split-testing”. But many marketers and retailers are hesitant when it comes to conducting their own tests.

They’re unsure about how to coordinate all the different parts of the split-testing process, from brainstorming to software selection to the analysis of results.

And while A/B split testing isn’t as easy as most people think, it’s far from difficult. Spending some time to implement well-structured and tested A/B split-testing processes will be hugely positive for your online store.


In this guide, you’ll be given a simple formula for conducting split-tests. You’ll also learn about common mistakes, see some real-life case studies, and receive practical and specific tips about which on-site and off-site elements to test.

What you’ll find in this article:

What is A/B Testing?
How to Do A/B Testing: an Overview
1. Analysis
2. Recommendations
3. Prototype and Design
4. Code and Test
5. Results
How to Calculate Your A/B Split-Testing Sample Size
Which Product Page Elements Should You Split-Test?
Top 11 Ecommerce A/B Split-Testing Mistakes to Avoid
A Review of the Best A/B Split-Testing Tools for Ecommerce
Examples of Ecommerce A/B Testing Case Studies
1. Budapester
2. Reserved
3. 4F
Conclusion

Let’s start!

What is A/B Testing?

A/B testing involves driving traffic to two different pieces of content – like ads, emails, web pages, and so on – to see which one performs better. Often, the only difference between test subjects is a single element such as a headline, CTA (Call to Action), image, piece of copy, etc. Alternatively, split-tests can be between two completely different content-types, such as Facebook ads, marketing emails, or even entire sales funnels.

So what do variants “A” and “B” in A/B testing usually represent? Whenever you run a test, you need a set of “ground level” or “control” results. “A” typically comprises your current results or the first iteration of your testing variant. “B” is the variation that you will compare the results of “A” against.

A generic example of an A/B test. (Source)
Let’s say, for example, that a product page receives several hundred visitors a day. You decide to run a split-test in which you add a notification about next-day delivery next to the “Add to Cart” button. So, using your A/B testing software, you create a lookalike page and split traffic equally between both pages and measure the results. The current page is test subject “A”. The variant is test subject “B”.

Alternatively, you may be gearing up for a promotional email campaign in which you direct your subscribers to a landing page to enter a free giveaway. You have built two landing pages – “A” and “B” – but you want to see which one attracts more entrants. Again, using your split-testing software, you drive half of the email traffic to page “A” and half to page “B”. Even though you don’t yet have any results, “A” is the control page, and “B” is the challenger.
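Under the hood, split-testing tools typically assign each visitor to a bucket deterministically, so a returning visitor always sees the same variant and each experiment gets an independent 50/50 split. Here's a minimal sketch of that idea in Python — the function name and hashing scheme are illustrative, not any particular tool's implementation:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a visitor into a test variant.

    Hashing the visitor ID together with the experiment name means the
    same visitor always sees the same variant, while different
    experiments split traffic independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same visitor always lands in the same bucket:
assert assign_variant("visitor-123", "cta-color") == assign_variant("visitor-123", "cta-color")
```

Because the assignment depends only on the visitor ID and experiment name, no server-side state is needed to keep the experience consistent across sessions.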


“Multivariate testing” works on the same principle but involves testing variants that include multiple changes. The aim is to determine which combination of variables performs best. In an A/B split-test, for example, you may test a green CTA button against a red one. In a multivariate test, you might change both the color and the CTA text at the same time. In a test with two on-page changes, this would create four variants:

  1. color one and text one,
  2. color one and text two,
  3. color two and text one,
  4. and color two and text two.

The benefit of multivariate testing is it removes the need to run lots of split-tests one after the other. The downside is that it requires a lot of traffic.
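The variant explosion described above can be enumerated programmatically. Here's a small illustrative Python sketch using `itertools.product` — the button colors and CTA texts are example values:

```python
from itertools import product

# Two variables with two options each (example values):
colors = ["green", "red"]
texts = ["Add to Cart", "Buy Now"]

# Every combination of the variables becomes one test variant:
variants = [{"color": c, "text": t} for c, t in product(colors, texts)]

for i, v in enumerate(variants, start=1):
    print(f"Variant {i}: {v['color']} button, '{v['text']}'")

# 2 variables x 2 options -> 4 variants. Adding a third two-option
# variable would double this to 8, which is why multivariate tests
# need so much more traffic than simple A/B tests.
```

The number of variants grows multiplicatively with each added variable, which is exactly why the traffic requirement balloons.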

How to Do A/B Testing: an Overview

Let’s look at a basic formula for conducting A/B split-tests. Don’t worry too much about the technical aspect of A/B testing at this stage. There is a range of tools available to streamline and automate everything from page creation to the interpretation of results, and we’ll outline the best apps and solutions in a moment.

A/B split testing is usually either on-site or off-site. On-site tests cover things like product pages, landing pages, checkout forms, and so on. On-site tests might also be conducted on the pages of an app – if, for example, you have a mobile shopping or loyalty program app. Basically, “on-site testing” is for any page on your site that has a singular goal and corresponding primary CTA.

Off-site tests are for variants of ads (especially paid advertisements), emails, social media posts, push notifications, and so on.

It’s essential to run tests at the same time with the same traffic sample. Traffic and time period constitute the two biggest variables that can skew results. There’s absolutely no benefit, for example, in comparing the results of two variants if one was tested on Halloween and the other on Mother’s Day.


Use the following process to structure your own A/B tests:

1. Analysis

In this stage, you determine your goals and prioritize which page elements to split-test.

Goals will center around boosting your key conversion metrics, and “conversions” might be clicks, sign-ups, or sales. You might even opt for “broader” success metrics like engagement or reach, especially when testing ads. Whatever the case, you need a clear metric by which to measure the relative success or failure of variants.

Once you’ve set goals, you can research and prioritize which tests to run. You should research those page templates (product pages, category pages, checkout forms, etc.) that are most important for your goals and then determine which have the greatest potential for improvement. Look at pages with high bounce rates, unusually low conversions, low engagement, high abandonment rate and so on.

Once you’ve identified pages which are both important and have potential, you should rank them according to the ease with which you can run tests. It’s better, especially when implementing a new strategy, to go for the lowest-hanging fruit, moving onto more complex tests as you acquire more data. This methodology will deliver the greatest returns over the shortest period of time.

Your chosen testing element might be a call-to-action, a headline in ad copy, an image on a landing page, a subject line in an email, or a social media post advertising a discount. The critical thing to remember is that testing subjects should usually constitute one element, with everything else remaining the same. The exception to this rule is when you are testing two separate variations, such as landing pages or sales funnels which are made up of unique emails and pages.

2. Recommendations

After you have identified which tests you want to run, you need to brainstorm variations and form hypotheses.

Ask the question, “Which changes might lead to better outcomes for pages and why?”

A hypothesis is an evaluation of why a page or element isn’t performing as well as it might and how you might improve it. When you run an A/B test, you are essentially testing a hypothesis.

You might conclude, for example, that your current product page CTAs don’t stand out enough and that visitors have trouble finding them. The way to solve this problem would be to use a brighter color for the CTA button.

The best way to formulate hypotheses is to use the following simple template: If…, then…, because….

Let’s look at an example:

If information about low stock is added to product pages next to the CTA, then the add-to-cart rate (and thus the conversion rate) will increase, because urgency-building elements prompt visitors to take action.

3. Prototype and Design

After forming hypotheses, many people jump in and start organizing the test. But it’s essential to properly brainstorm and verify different design options, ensuring that the whole team is on board and that all ideas are accounted for.

You should begin by creating loose wireframes of proposed changes, brainstorming as many possibilities as is feasible. After verifying those that seem most promising, you can create full prototypes for implementation purposes.

4. Code and Test

Begin by calculating your sample size. Your “sample size” is the amount of traffic you need to conclusively say that any differences in results are not due to chance. We cover this topic in-depth in the next section. If you are not currently driving high levels of traffic to a specific page, or your site is in the development stage, you can always buy traffic. Many services exist for this purpose.

Then, with the groundwork in place, you can select the right tools and begin the test. Different tools serve different testing needs. For individual page elements, a simple web editor is all that’s needed. For more complex split-tests, such as a comparison of different sales funnels, sophisticated tools may be required. Dedicated software is also available for email marketing and ad campaigns.

If you have a dedicated development team for implementing on-site code, the designs you created in the previous step will prove invaluable here.

5. Results

Once the test has run its course, you can evaluate results and formulate new split-tests. Evaluation has two purposes: to determine a winner and to generate ideas for future tests. Sometimes results will be inconclusive, causing you to revise or abandon your original hypotheses. In other cases, results will be so significant as to prompt similar tests on related pages or even more advanced variations of your original change.

Split-testing is best conducted as part of a longer-term strategy. You should aim to make lots of small changes over many weeks and months. All of these changes will add up to dramatically and consistently improve your overall conversion rate.
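When evaluating results in step 5, the standard statistical check for comparing two conversion rates is a two-proportion z-test. Here's a sketch using only Python's standard library — the function and the visitor numbers are illustrative, not the output of any particular tool:

```python
from math import sqrt
from statistics import NormalDist

def ab_test_result(conv_a, n_a, conv_b, n_b, significance=0.05):
    """Compare two conversion rates with a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return {"rate_a": p_a, "rate_b": p_b, "p_value": p_value,
            "significant": p_value < significance}

# Example: variant A converted 500 of 10,000 visitors (5.0%),
# variant B converted 590 of 10,000 (5.9%).
result = ab_test_result(500, 10_000, 590, 10_000)
print(result)
```

A small p-value means the difference is unlikely to be random noise; a large one means the test is inconclusive, not that the variants are proven equal.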

How to Calculate Your A/B Split-Testing Sample Size

Calculating your minimum sample size is relatively easy once you understand the underlying concepts.

Here are a few terms you’ll need to know:

  • Baseline conversion – The conversion rate for your current page.
  • Minimum detectable effect – The minimum percentage change from the baseline conversion rate that you care about detecting: it can be 2%, 3%, 5%, or 10%. In A/B testing it should rarely be above 10%. Smaller uplifts are easier to achieve but harder to prove, because you will need more users; bigger uplifts are easier to prove with fewer users, but it’s usually difficult to come up with a testing idea that has such a profound impact.
  • Statistical significance – Statistical significance is the degree to which you are “sure” about your results. In an ecommerce setting, you should aim for 80% to 95% statistical significance.
  • Significance level – Significance level is the inverse of statistical significance. A 5% significance level, for example, means that there is a 5% chance that results are due to random chance. A 5% to 20% significance level is normal.
  • Statistical power – Often sidelined by A/B split-testers, “statistical power” is the percentage that describes the probability that a test will find the minimum detectable effect, assuming it exists. For example, say you set the minimum detectable effect to 5% and statistical power to 80% and, at the end of the test, your alternate version doesn’t win. You have 80% certainty that the losing version is not better by 5% or more.

Use this calculator from Evan Miller to calculate your minimum sample size, and read this post on statistical significance if you want to learn more.
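The formula behind calculators like Evan Miller's can be reproduced in a few lines. Here's a sketch of the standard two-proportion sample-size calculation using Python's standard library, assuming a relative minimum detectable effect (which is how the uplifts above are expressed):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde_relative,
                            significance=0.05, power=0.80):
    """Minimum visitors per variant for a two-proportion A/B test.

    baseline     -- current conversion rate, e.g. 0.05 for 5%
    mde_relative -- smallest relative uplift worth detecting, e.g. 0.10
    significance -- chance of a false positive (the significance level)
    power        -- chance of detecting the uplift if it really exists
    """
    p1 = baseline
    p2 = baseline * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - significance / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A 5% baseline with a 10% relative MDE, 95% significance and 80% power
# works out to roughly 31,000 visitors per variant.
n = sample_size_per_variant(0.05, 0.10)
```

Notice how quickly the requirement drops as the MDE grows: halving the detectable effect roughly quadruples the sample size, which is the trade-off described in the bullet points above.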

Which Product Page Elements Should You Split-Test?

Product pages fit the criteria for picking testing candidates perfectly. They are among the most important and highest-traffic pages on an ecommerce site. They’re also easy to split-test.

Here are some of the product page elements which can have the greatest effect on conversions:

  • Title – The title is the first thing that customers see when they land on a product page. It identifies the item and distinguishes it from other products. You can experiment by including (or excluding) brand names, key features, and USPs, and sampling different versions of the generic product name.
  • Images – Product images can significantly affect conversions. In particular, the flagship product image – the one that customers see first before scrolling through subsequent images – carries a lot of weight. Run different variations of this image to see which one customers find most appealing.
  • Description – Persuasive descriptions compel customers to click the primary CTA. Experimenting with descriptions by adding persuasive elements to your copy can yield interesting results. Consider citing awards, mentions in the media, celebrity endorsements, stand-out reviews, and more.
  • Price – Virtually every single visitor to a page will look at the price. Numerous changes can be tested, including color, size, location, and any information included immediately next to the price – such as the original struck-through price before discounts or a deadline for a promotional price.
  • Feature Options – Often, visitors will need to select item features like color and size before purchasing. If these options are unclear or difficult to use, it can create a lot of friction for buyers. Ambiguous stock levels can also lead to uncertainty.
  • Delivery information – Shipping time and cost is another major factor in the decision-making process. You can eliminate doubt by showing delivery information in the right way, and even increase willingness to buy by prominently showing free, same-day, or next-day delivery.
  • CTA – This is a big one. Three features are most important when it comes to CTAs: shape, size, and color. CTAs should stand out from other elements on the page and be easy to click, especially on mobile.
  • Star rating – Online purchasers love reviews. Consider testing variants of a star rating shown underneath your headline and make it easy for customers to navigate the section on product pages dedicated to reviews.
  • Urgency-building features – Urgency-building elements – like countdown-timers, time-limited delivery, special discount prices, and so on – can dramatically boost a page’s conversions. Learn more about building urgency on product pages.

If you would like to see some examples of top-performing product pages, along with ideas about how to boost conversions, check out our post on the topic.

If you’re looking for other testing inspiration, we’ve written the most comprehensive ecommerce optimization checklist available on the web (or anywhere, for that matter). Download it for free now!


Top 11 Ecommerce A/B Split-Testing Mistakes to Avoid

When not done correctly, split-testing can be a colossal waste of time and money.

Avoid making the following mistakes:

  1. Split-testing pages that don’t affect conversions – There’s no use in split-testing pages that don’t affect conversions in a significant way. With limited time and resources, it’s crucial to research and prioritize the best candidates for testing.
  2. Split-testing multiple elements in one test – If you run tests with multiple elements, you have no way of knowing which variations are responsible for positive results. This negatively affects your ability to formulate hypotheses going forward and is also likely to lead to less-than-optimal results for the pages you ran the tests on.
  3. Using a small sample size – If you don’t adhere to good data science – calculating a sample size with statistical significance of between 80% and 95% – your results will be inconclusive. Over the long-term, this will more likely than not lead to negligible changes to your goals.
  4. “Borrowing” all your testing ideas – Competitor research and the use of case studies to inform your hypotheses is good practice. It’s a mistake when you only generate testing ideas from them. Many of your best results will likely come from tests that your competitors haven’t conducted.
  5. Sporadic split-testing – As the old saying goes: split-testing is for life, not just for Christmas. For the greatest conversion gains, and for a strategy that is able to adapt to shifting consumer behavior, testing should be conducted in a sustainable manner over the long-term.
  6. Lack of separation between design and development processes – There should be a clear distinction between tasks when it comes to brainstorming ideas (design) and implementing them (development and coding). Often, retailers will confuse these roles, resulting in either ineffective brainstorming or shoddy implementation. Even if one person is responsible for both jobs, it’s essential to ensure they have the appropriate skill sets.
  7. Basing hypotheses on hunches and assumptions – Every split-testing team will have a set of assumptions about what makes a “good testing idea”. But it’s important to be as open-minded as possible and create hypotheses that might seem counter-intuitive. The whole purpose of split-testing is to identify positive original changes. Processes should challenge underlying assumptions as much as possible and encourage designers to think outside the box.
  8. Failure to form proper hypotheses – It’s important to know the reasons behind positive changes. If you generate ideas without any forethought, you’re putting yourself at a disadvantage. Understanding the basis of successful outcomes enables you to formulate a clearer understanding of the behavior of your customer base over time and generate solid hypotheses going forward.
  9. Inadequate analysis of results – So CTA “B” converts at 10% while CTA “A” only converts at 5%. That’s the end of the story, right? No! Test data holds useful insights about customers, including information about high-converting segments, peak conversion times, on-page obstacles, and more. Use an analytics platform like Google Analytics to really drill down into test results.
  10. Overlooking small gains – Retailers often expect massive results and discount 2% or 3% changes as insignificant. In a way, this is understandable. The preponderance of ultra-successful case studies on the web has conditioned us to try and mirror the same results. But this is a mistake. Small increases, when they have robust statistical significance, are just as valid as larger ones; a test with high statistical power can reliably detect even a small effect.
  11. “Peeking” into results – Stopping split-tests prematurely (before you have attained your desired number of tested users) is a big no-no. Often, testers will conclude the efficacy of one variant over another based on results mid-test. When you do this, you ignore variance that can manifest over the course of a test, and it’s common for variations to arbitrarily outperform each other at certain times.
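The cost of peeking (mistake 11) can be demonstrated with a toy A/A simulation: both variants convert at the same 5% rate, so any declared “winner” is a false positive. Checking significance repeatedly mid-test inflates the false-positive rate well above the nominal 5% level. The simulation parameters below are illustrative:

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(42)  # fixed seed so the simulation is reproducible

def z_significant(c_a, c_b, n, alpha=0.05):
    """Two-proportion z-test on two equal samples of size n."""
    p_pool = (c_a + c_b) / (2 * n)
    if p_pool in (0, 1):
        return False
    se = sqrt(p_pool * (1 - p_pool) * 2 / n)
    z = abs(c_b / n - c_a / n) / se
    return 2 * (1 - NormalDist().cdf(z)) < alpha

peeked = final = 0
for _ in range(500):                     # 500 simulated A/A tests
    c_a = c_b = 0
    hit = False
    for n in range(1, 5001):
        c_a += random.random() < 0.05    # both variants convert at 5%
        c_b += random.random() < 0.05
        if n % 100 == 0 and z_significant(c_a, c_b, n):
            hit = True                   # a mid-test "peek" declared a winner
    peeked += hit
    final += z_significant(c_a, c_b, 5000)

print(f"False positives when peeking every 100 visitors: {peeked / 500:.1%}")
print(f"False positives when checking only at the end:   {final / 500:.1%}")
```

Since the variants are identical, the end-of-test check stays near the expected 5% error rate, while stopping at the first significant peek produces far more false winners. That's the variance the article warns about.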

A Review of the Best A/B Split-Testing Tools for Ecommerce

A/B split-testing should encompass most aspects of your marketing and sales activities. It shouldn’t be limited to your site. Most dedicated apps, such as those for your email marketing, Facebook advertising, social media, and so on, will come with their own A/B split-testing tools.

This list outlines the best tools for running split-tests on your ecommerce site. Note that there isn’t an all-out “best” tool when it comes to A/B split testing. Different solutions are designed for different types of online stores, and the best choice of software depends on a range of factors, including size, industry, preferred marketing methods, and more.

Here’s our rundown of the top five ecommerce A/B split-testing tools:

  1. VWO – VWO is one of the most popular ecommerce tools on the web for conducting analysis, developing new ideas, and running tests. As a platform, it has all the features needed to run optimization campaigns and is very versatile – with a range of options for enterprise companies and smaller businesses (and everything in between). VWO includes eBay on its client list.
  2. Optimizely – Another big name in the international ecommerce space, Optimizely is a favorite among “big name” online retailers. The software includes a powerful package of features for conducting A/B tests, allowing for segmentation of samples, forecasting, targeting, and analysis. It’s perfect for use on both mobile and desktop.
  3. Google Optimize – One of the big selling points of Google Optimize is its seamless integration with Google Analytics, although “selling point” is perhaps the wrong phrase since it’s free. Optimize is a full A/B testing platform and has its own visual editor. It’s found a large following mostly among smaller companies, which is understandable given that it lacks many of the enterprise-level features of competitors. There is a paid version, Optimize 360, which users can upgrade to at a later date.
  4. AB Tasty – AB Tasty has been designed for larger enterprises and comes with a full set of testing tools, including a feature-rich analytics platform, visual editor, and automated implementation functionality for running tests.
  5. Swiftswap – We couldn’t compile a list of the top testing tools without including Growcode’s software, Swiftswap. What makes Swiftswap unique is its use of AI to inform and streamline the testing process. It integrates with all ecommerce platforms. It’s also designed to deliver fast and consistent optimization changes to ecommerce stores and is available as part of Growcode’s outsourced optimization package.

Examples of Ecommerce A/B Testing Case Studies

So what does an A/B split-test look like in practice?

Here are three examples from Growcode’s own case files:

1. Budapester

Budapester is a large online retailer that sells designer bags, shoes, and accessories. The company wanted to implement a long-term testing plan that was cost-effective. Analysis showed that product pages and the shopping cart had the greatest potential for improvement.

Read the full case study here.

Result: Conversion rate increased by 12.5%.

The following hypotheses were formulated and tested:

Hypothesis one: clearer communication of the USP on all pages would boost conversions.

Before: The USP, which includes free shipping and immediate product availability, was not shown on product pages.


After: The USP was included below the product description and in the header.


Hypothesis two: the header was taking up too much space and distracting visitors with unnecessary links and information.
Before: The header was unclear, with lots of small buttons, hard-to-read text, and unnecessary links.


After: The header was simplified and the main buttons were made clearer.


Hypothesis three: a streamlined shopping cart would reduce cart abandonment.
Before: On the purchase confirmation page, information about free delivery was not shown and discounted prices were not highlighted.


After: Free delivery, availability, and discounts were all included in bright colors to make them noticeable.


2. Reserved

Reserved is the biggest fashion retailer in the CEE region. The online store was launched in 2013.

Result: 4.6% conversion rate increase.

Read the full case study here.

The following hypotheses were formulated and tested:

Hypothesis one: Adding the USP to main pages – home page, product pages, and category pages – would help to persuade visitors of the unique benefits of shopping with Reserved.

Before: No clear USPs were displayed on the home page.


After: The USPs were displayed on the homepage just underneath the header.


Hypothesis two: Including the USP on shopping cart pages would reduce cart abandonment.
Before: Certain USPs were shown, but they were not clearly explained. Information about free delivery and free courier delivery on purchases over $50 was not shown.


After: A section displaying information about USPs was included on the right of the page.


3. 4F

4F sells sportswear and sports accessories. The company has built a reputation for quality – mixing traditional manufacturing processes with modern designs.

Result: 8% global conversion rate increase.

Read the full case study here.

The following hypotheses were formulated and tested:

Hypothesis one: Including detailed descriptions on product pages will alleviate doubt and prompt more visitors to add products to the cart.

Before: Product information was scattered, difficult to scan, and far away from the CTA.


After: Product details, including delivery information, were written to be scannable and placed next to the CTA.


Hypothesis two: Showing discounts as a percentage will prompt more customers to add products to the cart.
Before: The original price was struck through and shown next to the discounted price, with no further information.


After: A figure showing the discounted price as a percentage was included next to the current price.


Hypothesis three: Showing information about in-store delivery would boost conversions because it is highly relevant to customers and 4F has a well-known chain of local stores.
Before: Information about store delivery was quite far down the page.


After: Direct shipping and in-store delivery details were displayed next to each other above the CTA.


As you can see, most of the page elements that were tested are fairly generic A/B testing examples. Despite the fact that they might seem “safe”, they can still drive significant boosts to conversion rates.

Conclusion

Armed with the information outlined in this post, you can start to conduct tests that drive real results and move you closer towards your conversion and revenue goals.

But there’s an important point to keep in mind.

Don’t forget the importance of continual and consistent split-testing.


Implementing an optimization campaign that involves making lots of small changes over time will place you well above your competition. It’s the strategy that big players like Amazon use to achieve conversion rates well above the industry average.

By the way, if you’re looking for testing inspiration, we’ve written the most comprehensive ecommerce optimization checklist available on the web (or anywhere, for that matter). Download it for free now!

Why Not Get in Touch With Growcode?

If you would like an experienced team to take over your optimization strategy, why not get in touch with Growcode? We have years of experience running split-tests and can implement a long-term strategy at a fraction of what it will cost you to manage yourself. Plus the results are guaranteed. If we fail to deliver them, we will give you a full refund.
Read about our unique, hands-free approach here.


