How to A/B Test Subject Lines

How to A/B test the subject lines of your broadcasts, the reports provided, and answers to frequently asked questions.


Knowing your subscribers is a key ingredient to effective email marketing.

A/B testing is a great way to gain this knowledge! At ConvertKit, you can test different subject lines for your broadcasts, and we'll automatically determine the winner based on their respective open rates.

How does A/B testing work at ConvertKit?

A/B testing is optional and allows you to test two broadcast subject lines against one another within a subset of your recipients. We'll then automatically send the winning variant to the remaining recipients.

The Process

  • We'll send each subject line to 15% of your recipients (30% of your total recipients combined)

  • After a four-hour testing period, we'll determine the winning subject line based on which one resulted in the higher open rate

  • We'll then automatically send the winning variant to the remaining 70% of your recipients (those who were not involved in the initial test)

☝️ NOTE: Make sure that you take into consideration the four-hour test period before running an A/B test! It might not make sense for time-sensitive emails where four hours would be too long to wait for the full send.
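
If it helps to see the flow end to end, here's a minimal sketch of the split-test logic in Python. This is purely illustrative, not ConvertKit's actual implementation; the `send` and `open_rate_after_window` callbacks are hypothetical stand-ins for the real sending and tracking machinery.

```python
import random

TEST_FRACTION = 0.15   # each variant goes to 15% of recipients
TEST_WINDOW_HOURS = 4  # open rates are compared after this window

def run_ab_test(recipients, send, open_rate_after_window):
    """Illustrative sketch: test two subject lines on 30% of the list,
    then send the winner to the remaining 70%."""
    random.shuffle(recipients)
    n_test = int(len(recipients) * TEST_FRACTION)
    group_a = recipients[:n_test]
    group_b = recipients[n_test:2 * n_test]
    remainder = recipients[2 * n_test:]  # the untested ~70%

    send("Subject line A", group_a)
    send("Subject line B", group_b)

    # ...wait TEST_WINDOW_HOURS, then compare open rates...
    rate_a = open_rate_after_window("Subject line A", group_a)
    rate_b = open_rate_after_window("Subject line B", group_b)

    winner = "Subject line A" if rate_a >= rate_b else "Subject line B"
    send(winner, remainder)
    return winner
```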

How to set up an A/B test

To set up an A/B test for a broadcast, just click the A/B symbol next to the subject line.

This will allow you to input your two subject line variants for this email, which will be tested against one another to see which results in the higher open rate.

Adding two subject line variants is all you have to do to set up an A/B test! You can then send the broadcast as normal and the process outlined above will take place automatically, with no further action required on your part (except checking back to see which was the winner, if you're curious!).

Who should (and shouldn't) A/B test?

We only recommend A/B testing for emails going out to 1,000 recipients or more. A test run on a smaller number will not give actionable data, as the percentages and numbers will not be statistically significant.

For example: if you send your broadcast to 100 recipients, then each subject line variant would go out to 15 people (15% of recipients) for the initial test. Even if your email has a high open rate of ~50%, this still only means seven or eight subscribers will likely open each variant.

The result? That 'winning' subject line could be determined by the whims of a single person. One individual's decision of whether or not to open an email will not provide meaningful insights to apply to your greater list.
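
To make the small-sample problem concrete, here's a quick simulation you could run yourself (again, just an illustration, not anything ConvertKit does). It estimates how often a test group's observed open rate lands more than ten percentage points away from the "true" open rate:

```python
import random

def big_miss_rate(group_size, true_open_rate, trials=10_000):
    """Fraction of simulated tests where the observed open rate
    lands more than 10 points away from the true open rate."""
    misses = 0
    for _ in range(trials):
        opens = sum(random.random() < true_open_rate
                    for _ in range(group_size))
        if abs(opens / group_size - true_open_rate) > 0.10:
            misses += 1
    return misses / trials

# 100-recipient list -> 15 per test group: big swings are common
print(big_miss_rate(15, 0.50))   # roughly 0.3
# 1,000-recipient list -> 150 per test group: far more stable
print(big_miss_rate(150, 0.50))  # only a couple percent
```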

A/B test reporting

If you run an A/B test for a broadcast, its reports page will have some extra statistics available.

Here's a breakdown of what you'll see stats-wise for an A/B tested broadcast:

Let's say this broadcast was sent to 1,000 recipients total. This means that each variant will have been sent to 150 recipients (15% of the total recipient list) for testing.

The aggregate stats section includes the stats for all 1,000 recipients, including the 300 recipients that received the test variants.

The results for variants A and B are the stats isolated to the test groups only (i.e. 150 recipients each).
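
As a rough back-of-the-envelope illustration of how those numbers relate (all open counts below are made up for the example):

```python
# Hypothetical numbers for the 1,000-recipient example above
total = 1_000
test_a = test_b = 150                # 15% of the list per variant
remainder = total - test_a - test_b  # 700 receive the winner

opens_a, opens_b, opens_rest = 45, 36, 210  # made-up open counts

print(opens_a / test_a)   # variant A's isolated open rate: 0.30
print(opens_b / test_b)   # variant B's isolated open rate: 0.24
# Aggregate stats cover all 1,000 recipients, test groups included
print((opens_a + opens_b + opens_rest) / total)  # 0.291
```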

NOTE: Aggregate stats won't be available during the four-hour testing period, because the winner hasn't been sent to your larger list yet.

FAQs

Where is A/B testing available in ConvertKit?

A/B testing is only available for the subject lines of broadcasts at this time.

Can I manually end an A/B test early?

Yes! You can cancel the test at any point.

What happens if I choose to cancel my A/B test?

If you cancel your A/B test before it completes, it will behave like any completed broadcast: its reports will show stats for the portion that was actually sent (the 30% of your recipient list used for testing), and the recipient count will update to reflect that smaller group.

Why does the winner of my A/B test have the lower open rate?

At the four-hour mark of an A/B test, the system will automatically send out the subject line variant with the higher open rate at that point in time to the remaining recipients.

That winner will display a 'Winner' badge in the broadcast's reports.

After the initial four-hour testing period, the original test recipients can (and likely will!) continue to open the original two test-sends. The isolated stats for each test-send will continue to be updated even after the test is 'over' and a winner has been determined. In some cases, what was the losing variant at the four-hour mark can end up overtaking the winner later on.

Once the testing period is over, the original winner (the one with the higher open rate at the four-hour mark) will have already been sent to the remaining recipients. This winner will continue to display the 'Winner' badge, even if the other variant overtakes it later on.
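
For example (with made-up numbers): say at the four-hour mark variant A has 30 opens out of 150 (20%) and variant B has 27 out of 150 (18%). Variant A wins and is sent to the remaining 70%. If late openers later lift variant B to 40 opens (26.7%) while variant A sits at 35 (23.3%), variant B now shows the higher isolated open rate, yet variant A keeps the 'Winner' badge.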

Why does the winner of my A/B test have a lower click rate?

Click rates do not factor into A/B testing; only open rates do. We display the click rate per variant for your own reference; however, it will not factor into which subject line wins the test.
