In this post, we will go through what A/B testing is, what to consider, what can be tested, and how to use A/B testing in the best possible way.
What is A/B testing?
A/B testing, also known as split testing, is a method of finding out which variant of an asset engages recipients the most. An A/B test consists of two variants of the same asset (e.g. a page on the website or an e-mail), each shown to a different, randomly selected group. The difference between the variants depends on what you want to test: it could be the subject line of an email, or perhaps the CTA text on a button on the website.
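The random split described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the recipient addresses and the fixed seed are hypothetical, used only to make the example reproducible.

```python
import random

def split_audience(recipients, seed=42):
    """Randomly split a recipient list into two equally sized groups.

    The seed is fixed here only so the sketch is reproducible;
    in a real campaign you would not fix it.
    """
    shuffled = recipients[:]  # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # group A, group B

# Hypothetical recipient list.
recipients = [f"user{i}@example.com" for i in range(10)]
group_a, group_b = split_audience(recipients)
print(len(group_a), len(group_b))  # 5 5
```

The key point is that assignment to group A or B is random, so any systematic difference in results can be attributed to the variant rather than to who happened to receive it.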
What can be tested?
You can actually test most things, but here are some examples and suggestions for A/B testing of email content:
To optimize open-rate:
- Subject line
- Use of dynamic tags in the subject line
- Preview text
- Weekday or time to send out an email
- Mailing frequency
- Sender (Personal vs. General)
To optimize click-rate:
- The text on the CTA
- The colour of different elements such as the button
- Longer vs. shorter content
- Use of images
- Use of dynamic tags
What to consider when it comes to A/B testing?
When performing an A/B test, it is important to remember not to test more than one element per test. If you test multiple elements at once, you won't know which one improved or worsened your results, and it becomes impossible to keep optimizing your mailings systematically.
Another important aspect is measurability: how will you measure, and what data will you use?
It may seem obvious, but it is easy to set up a complex test and then not know what the result actually was.
For example: you want to optimise clicks on emails. Or do you? What you REALLY want is to optimize conversion; clicks from the email merely generate traffic to the landing page. You can make the email super click-baity and generate lots of clicks, but those visitors won't buy. If instead you measure against the "right" target, you can see which variant generated the most conversions, regardless of clicks.
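The gap between clicks and conversions can be made concrete with invented numbers. In this hypothetical scenario, a click-bait variant wins on click rate but loses on conversion rate, which is the metric that actually matters:

```python
# Hypothetical campaign numbers illustrating clicks vs. conversions.
variants = {
    "A (click-bait subject)": {"sent": 1000, "clicks": 300, "conversions": 9},
    "B (honest subject)":     {"sent": 1000, "clicks": 150, "conversions": 21},
}

for name, v in variants.items():
    click_rate = v["clicks"] / v["sent"]
    conversion_rate = v["conversions"] / v["sent"]
    print(f"{name}: click rate {click_rate:.1%}, "
          f"conversion rate {conversion_rate:.1%}")
```

Judged on clicks alone, variant A looks like the winner; judged on conversions, variant B is the one to keep.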
Control group
There is also something called a control group: a randomly selected group that does not receive a particular campaign, or any marketing at all. The control group can act as the B in the A/B test, or simply be a third group.
With the help of the control group, you can see if the marketing you do is helping or hurting. Could it be that your control group that didn't receive a particular campaign actually converted better?
Multivariate test
Multivariate testing is another way to test and optimize conversions. The difference from A/B testing is that you compare several variables at once and see how they interact with each other. For example, with two buttons and two subject lines:
Variant A: tests button 1 and subject line 1
Variant B: tests button 2 and subject line 1
Variant C: tests button 1 and subject line 2
Variant D: tests button 2 and subject line 2
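The four variants above form a full-factorial design: every button paired with every subject line. A sketch of how such combinations can be generated (the variant names and element labels are illustrative):

```python
from itertools import product

buttons = ["button 1", "button 2"]
subject_lines = ["subject line 1", "subject line 2"]

# Full-factorial design: every subject line paired with every button.
variants = {
    chr(ord("A") + i): combo
    for i, combo in enumerate(product(subject_lines, buttons))
}

for name, (subject, button) in variants.items():
    print(f"Variant {name}: {button} and {subject}")
```

Note how quickly the number of variants grows: with three variables of two options each you already need eight groups, which is why multivariate tests require a larger audience than a simple A/B test.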
Things to consider, no matter what kind of test you run
It is important not to treat the result as a final answer without questioning its statistical significance.
Statistical significance is a measure of how unlikely it is that the difference in results between the versions is due to error or chance.
To sum it up, you want the result to be statistically significant so that you know you are choosing the right version from your test to go forward with.
There are several different tools for this on the web; we usually use this one.
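If you prefer to compute it yourself, a common approach for comparing two conversion rates is a two-proportion z-test. This is a minimal sketch using only the standard library; the conversion counts are hypothetical:

```python
from math import erf, sqrt

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the z statistic and two-sided p-value.

    conv_* = number of conversions, n_* = recipients per variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: variant A converts 120/1000, variant B 150/1000.
z, p = z_test_two_proportions(120, 1000, 150, 1000)
print(round(z, 2), round(p, 4))  # p just under the common 0.05 threshold
```

A p-value below the conventional 0.05 threshold is usually taken to mean the difference is statistically significant, though the threshold you choose is ultimately a business decision.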
Questions to ask yourself:
- Which test is best for your purpose? A/B or multivariate?
- Should there be a control group?
- What to test?
- How will you measure?
- What is the aim of the test?
- What will you do with the results?
- When is the end date of the test?
- How do we calculate the statistical significance of the result?
- Do you have good documentation of which tests have been carried out, so you can build on your strategy instead of repeating tests of the same hypothesis?