Split or A/B Testing Guidelines

Why Should You Conduct A/B Testing?

Split testing, commonly referred to as A/B testing, is a method of testing in which marketing variables (such as copy, images, or layout) are compared against each other to identify the one that produces a better conversion rate. In this context, the element being tested is called the “control,” and the variation expected to give a better result is called the “treatment.”

Running A/B tests in your marketing initiatives is a great way to learn how to drive more traffic to your website and generate more leads from the visits you’re getting. Just a few small tweaks to a landing page, email, or call-to-action can significantly affect the number of leads your company attracts.

Such increases in lead generation can provide a huge competitive advantage for your company. In fact, chances are your competitors aren’t doing A/B testing right, giving you more room to grow and create better content that converts.

In other words, if you’re not A/B testing, you’re missing out on opportunities to increase your conversion rates and glean learnings to improve your marketing content over time.

Split Test or A/B Guidelines

There are certain guidelines you should keep in mind before you implement your tests. In this section we will cover some best practices that will make it easier to measure your results and find out which variation performed better. This knowledge will guide you in figuring out how best to optimize your landing pages, calls-to-action, and emails.

1. Only conduct one test (on one asset) at a time

Let’s say you have a new offer coming out that’s promoted via an email that links to a landing page. You might decide to test the audience segment you’re sending the offer to, and you might also be interested in testing which landing page image improves conversions.

However, if you conducted both tests simultaneously, you would actually muddle the results. How would you know which change ultimately impacted the conversion rates? Maybe it was the audience, maybe it was the image, or maybe it was both! But if you test one hypothesis at a time, you’ll have results that will lead to stronger conclusions.

2. Test one variable at a time

In order to evaluate how effective an element is on your page, call-to-action, or email campaign, you have to isolate that variable in your A/B test. Only test one element at a time. For example, don’t test both the landing page image and the copy on the page in the same run; as we mentioned in the previous tip, it’ll muddle your results. Note that by testing the entire page, email, or CTA as the variable, you can achieve drastic improvement. That said, you may not be able to pinpoint which changes caused that improvement.

3. Test minor changes, too

Although it’s reasonable to think that big, sweeping changes can increase your conversion rates, the small details are often just as important. While creating your tests, remember that even a simple change, like switching the color of your call-to-action button, can drive big improvements.

4. You can A/B test the entire element

While you can certainly test a button color or a background shade, you should also consider making your entire landing page, call-to-action, or email one variable. Instead of testing single design elements, such as headlines and images, design two completely different pages and test them against each other. Now you’re working on a higher level. This type of testing yields the biggest improvements, so consider starting with it before you continue your optimization with smaller tweaks.

5. Measure as far down the funnel as possible

Sure, your A/B test might have a positive impact on your landing page conversion rate, but how about your sales numbers? A/B testing can have a significant effect on your bottom line. You may even see that a landing page that converted fewer prospects produced more sales. As you create your A/B test, consider how it affects metrics such as visits, click-through rates, leads, traffic-to-lead conversion rates, and demo requests.

6. Set up a control & treatment

In any experiment, you need to keep a version of the original element you’re testing. When conducting A/B tests, set up the unaltered version as your “control”: the landing page, call-to-action, or email you would normally use. From there, build variations, or “treatments”: the pages, calls-to-action, or emails you’ll test against your control.

For example, if you are wondering whether including a testimonial on a landing page would make a difference, set up your control page with no testimonials. Then create your variation(s).

Variation A: Control (the unaltered, original version)

Variation B: Treatment (the optimized version which you expect to perform better)
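To make this concrete, here is a minimal sketch (in Python) of how visitors could be assigned consistently to the control or the treatment. The variation names, the `visitor_id` parameter, and the 50/50 split are illustrative assumptions, not part of any particular tool.

```python
# Illustrative sketch: assign each visitor to the control (A) or treatment (B).
# Hashing the visitor ID keeps the assignment stable across repeat visits.
import hashlib

VARIATIONS = ["A_control_no_testimonial", "B_treatment_with_testimonial"]

def assign_variation(visitor_id: str) -> str:
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIATIONS)  # roughly even split across variations
    return VARIATIONS[bucket]

print(assign_variation("visitor-1042"))  # the same visitor always sees the same page
```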

7. Decide what you want to test

As you optimize your landing pages, calls-to-action, and emails, there are a number of variables you can test. You don’t have to limit yourself to testing only one background color or text size. Look at the various elements on your marketing resources and their possible alternatives for design, wording, and layout.

In fact, some of the areas you can test might not be instantly recognizable. For instance, you can test different target audiences, timing of promotions, alignment between an email and a landing page, etc.

8. Split your sample group randomly

In order to achieve conclusive results, you need to test with two or more audiences that are equivalent. For instance, in email A/B testing, each of your email variations must be sent to as similar a group of recipients as possible. List sources, list type, and the length of time a particular name has been on a list are all factors that may cause large differences in response rates.

Your test results will not be conclusive or you may draw the wrong conclusions if you do not split your lists randomly. If you want to compare the performance of two or more lists, keep all other aspects of the design and timing identical so you get clean results based on list and nothing else.
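As a rough illustration, here is a minimal Python sketch of splitting a single email list into two randomized, equal-sized groups. The `recipients` list and the group sizes are made-up assumptions for the example.

```python
# Illustrative sketch: randomly split one email list into two equal groups
# so that variation A and variation B go to comparable audiences.
import random

recipients = [f"subscriber{i}@example.com" for i in range(1000)]  # placeholder list

random.shuffle(recipients)        # randomize order to avoid list-order bias
midpoint = len(recipients) // 2
group_a = recipients[:midpoint]   # receives variation A (control)
group_b = recipients[midpoint:]   # receives variation B (treatment)

print(len(group_a), len(group_b))  # 500 500
```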

9. Test at the same time

Timing plays a significant role in your marketing campaign’s results - be it time of day, day of the week, or month of the year. If you were to run test A during one month and test B a month later, you wouldn’t know whether the changed response rate was a result of the different template or the different month. A/B testing requires you to run the two or more variations at the same time. Without simultaneous testing, you may be left second-guessing your results.

10. Decide on necessary significance before testing

Before you launch your test, think about how significant your results should be in order for you to decide that the change should be made to your website or email campaign. Set the statistical significance goal for your winning variation before you start testing; 95-99% statistical significance is usually a good range to aim for. Want a deeper dive into what statistical significance means? We’ve included an easy-to-use Statistical Significance calculator that will tell you just how significant your results are in all of our DWY and DFY funnel packages.
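For a rough sense of the arithmetic behind such a calculator, here is a minimal Python sketch of a two-proportion z-test comparing the conversion rates of a control and a treatment. The visitor and conversion counts are invented for the example, and reporting confidence as one minus the p-value is a simplification commonly used by A/B testing tools.

```python
# Illustrative sketch: two-proportion z-test for an A/B conversion-rate comparison.
from math import sqrt, erf

def significance(conversions_a, visitors_a, conversions_b, visitors_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, 1 - p_value  # confidence expressed as a fraction

p_a, p_b, confidence = significance(200, 5000, 250, 5000)  # made-up sample data
print(f"A: {p_a:.1%}  B: {p_b:.1%}  confidence: {confidence:.1%}")
# If the confidence meets your pre-set goal (e.g. 95%), treat B as the winner.
```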
