Are You A/B Testing Your Direct Mail? You Should Be!

In any marketing campaign, you should be testing the results of what you're putting in front of your customers. If you don't test and you don't analyze the results, you could keep making the same mistakes—and that affects your bottom line. A/B testing is a great way to quantify what works for your target audience and to learn what could be working better.

A/B testing involves sending out two variations of your direct mail postcards. Some A/B tests involve subtle changes, while others compare radically different versions. By testing one variable at a time, you can learn exactly what drives responses and steadily improve your response rate and profit margin.

In 2018, direct mail’s average response rate was 9% for house lists and 4.9% for prospect lists. How do your mailings measure up to this benchmark? How could they be performing better? A/B testing can help answer these and other questions.

Types of A/B Tests

Single Variable Test

This is what most people think of when they talk about split tests. In a single variable direct mail test, you create one mail piece (the "A" piece), duplicate it, and change one element on the "B" piece. Everything except the test variable must remain identical. You then randomize your mail list and split it in half. In most cases, a single variable test is the ideal method because it leaves no doubt about why your audience reacted the way it did.

Control Group Test

In this type of test, your “A” mailer is one that you have sent before. “B” can be completely different from “A”, with many changed variables, or you can still test a single variable for clearer results. Split your list in half and see which version gets the better response. This approach works well when you are considering a new format and want to see how it is received.

Remember to make sure that each half of your mail list only gets one mail piece (“A” or “B” only) during the test period. If they receive both, you'll never know what caused them to respond.

Step One: Set Your Goal

Before you begin your A/B testing, set a goal that you want to reach. Take a look at your data from previous campaigns and see where you'd like to improve. Do you want a higher ROI on your mail piece? Do you want a higher response rate? Was your conversion rate too low?
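
To make those metrics concrete, here is a minimal sketch of how each one is calculated. The campaign numbers are hypothetical placeholders; substitute your own figures.

```python
# Hypothetical campaign numbers -- substitute your own.
pieces_mailed = 5000
responses = 250          # coupon redemptions, calls, or tracked URL visits
orders = 90              # responses that turned into a sale
revenue = 9000.00        # revenue attributed to the campaign
cost = 2500.00           # printing, postage, and list costs

response_rate = responses / pieces_mailed   # 5.0%
conversion_rate = orders / responses        # 36.0%
roi = (revenue - cost) / cost               # 2.6, i.e. 260%

print(f"Response rate:   {response_rate:.1%}")
print(f"Conversion rate: {conversion_rate:.1%}")
print(f"ROI:             {roi:.0%}")
```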

Without data from previous campaigns, it’s harder to know which tests will matter most, but you can still get started. Begin with a single-variable test that you can easily track; the winning version then becomes the control for your next campaign.

Variables to Test

There are many variables you can test by using the A/B method, including:

  • Audience. (More on this below.)
  • Offers. For example, you can measure whether a dollars-off discount performs better than a percentage discount.
  • Copy. You can test different headlines, emphasis on some benefits (like price or quality) over others, or even different ways of saying the same thing (50% off vs. 2-for-1).
  • Design Elements. You can test many different variables within the design! Graphic elements vs. photographs, different colors... the list goes on.
  • Medium. What works better, a postcard or a brochure? You can find out, and use that data to guide future campaigns.

Step Two: Segment Your List

Statistically sound A/B tests require randomness, so you’ll need to create randomized test groups in advance. If you can export your customer contacts into a CSV file, you can paste your data into a randomizer tool, such as the list randomizer at random.org. Then copy the reordered data and paste it back into your spreadsheet. You now have a randomly ordered list of contacts that you can split neatly in half: cut and paste half of the rows into a new CSV file and save the remaining rows as a second file. You now have two CSV files, each containing an equal number of randomly assigned contacts.
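
If you’re comfortable with a little scripting, the same shuffle-and-split can be done in a few lines of Python. This is only a minimal sketch: the file names (contacts.csv, group_a.csv, group_b.csv) are placeholders, and it assumes your export has a header row.

```python
import csv
import random

# Read the exported contact list (assumes a header row; adjust the
# filename and columns to match your own CRM export).
with open("contacts.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    contacts = list(reader)

# Shuffle so the split is random rather than alphabetical or by sign-up date.
random.shuffle(contacts)

# Split the shuffled list down the middle: the first half gets piece "A",
# the second half gets piece "B".
midpoint = len(contacts) // 2
groups = {"group_a.csv": contacts[:midpoint], "group_b.csv": contacts[midpoint:]}

for filename, rows in groups.items():
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
```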

But what if the audience itself is the variable you want to test? In that case, you send exactly the same mail piece to two separate lists. For example, the lists might be households in two different income brackets. When the results come in, you’ll see whether one list responded better than the other, which gives you valuable insight into the purchasing power of your ideal customer.

Step Three: Develop Your Mail Piece

Now create your “A” and “B” pieces. For a single variable test, keep everything identical except the one element you’ve chosen to test; for a control group test, your “A” piece is simply the mailer you’ve sent before.

Step Four: Track Results

To compare your conversion rates, create a unique coupon code for each version so you can tell which piece generated each order. Or, create a custom URL on your website for each test offer and check your website analytics for results. Make sure you include an expiration date on your offer so that you know when to tally the test results.
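
As one concrete (and entirely hypothetical) example, the snippet below counts redemptions per coupon code from an exported orders file and turns them into response rates. The file name, the coupon_code column, and the codes SPRINGA and SPRINGB are all assumptions; swap in whatever your order system actually exports.

```python
import csv
from collections import Counter

# Count redemptions per coupon code from an exported orders file.
# Assumes a "coupon_code" column; rename to match your own export.
redemptions = Counter()
with open("orders.csv", newline="") as f:
    for row in csv.DictReader(f):
        code = row.get("coupon_code", "").strip().upper()
        if code:
            redemptions[code] += 1

mailed_per_group = 2500  # pieces mailed per test group (example figure)
for code in ("SPRINGA", "SPRINGB"):  # hypothetical codes for pieces A and B
    rate = redemptions[code] / mailed_per_group
    print(f"{code}: {redemptions[code]} responses ({rate:.1%} response rate)")
```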

Once you have your results, you should be able to see which version drew more responses and was therefore more effective.

Understanding Statistical Significance

The more pieces of mail you send as part of the test, the more reliable your results will be. For example, a test across 10,000 mailings will give you more dependable results than one across 2,000. Even so, you need to confirm that the difference you see is statistically significant rather than random noise. An online significance calculator can tell you how meaningful your results are.
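
If you’d rather check significance yourself, a standard two-proportion z-test does the job. The sketch below uses only Python’s standard library and made-up response counts; an online calculator will give the same answer for the same numbers.

```python
from math import sqrt, erfc

def two_proportion_z_test(successes_a, total_a, successes_b, total_b):
    """Return the z statistic and two-sided p-value for a difference
    in response rates between groups A and B."""
    p_a = successes_a / total_a
    p_b = successes_b / total_b
    # Pooled response rate under the null hypothesis (no real difference).
    p_pool = (successes_a + successes_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal CDF
    return z, p_value

# Hypothetical results: piece A got 130 responses out of 2,500 mailed,
# piece B got 170 responses out of 2,500 mailed.
z, p = two_proportion_z_test(130, 2500, 170, 2500)
print(f"z = {z:.2f}, p = {p:.4f}")
# A p-value below 0.05 is the usual threshold for calling the difference real.
```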

Step Five: Continue To Test

A/B testing identifies opportunities to improve future campaigns. It’s an ongoing process: run it on every campaign so that each round of results builds on the last.

You also need to realize that your customers’ wants and needs will change over time. Always keep testing and bringing new ideas to market.