As a marketer, you’re often running tests to find which subject line gets the most email opens, what homepage content improves your search engine optimization (SEO), what button color leads to more purchases, and so on.
Sometimes, however, you need to answer a different type of question:
“What can I do to cause an incremental improvement versus doing nothing at all?”
Let’s look at an example. Say you wanted to run an email campaign with a 20 percent discount offer to customers who are likely to want to buy knitwear. To determine whether this campaign is successful, you ultimately need to know how many people made a purchase from this campaign who wouldn’t have done so if they had never received the discount email.
If 80 people buy a new knit top, but almost all of those people would have done so anyway (except now they got a nice 20 percent off coupon), then you’ve actually hurt your company by giving discounts where they weren’t needed.
However, by properly designing and executing an experiment with a control group (or holdout sample), you can measure with confidence whether the campaign was effective. In short, you need to take a random subset of the population, do nothing to them, and then compare what they do to the group that received the treatment (an email campaign in this example).
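The knitwear example above boils down to simple arithmetic: subtract the purchases the control group’s baseline rate predicts from the purchases the test group actually made. Here’s a minimal sketch with hypothetical numbers (none of these figures come from a real campaign):

```python
# Hypothetical numbers for illustration only.
test_size, control_size = 10_000, 10_000
test_buyers, control_buyers = 80, 62  # purchases during the test window

control_rate = control_buyers / control_size  # baseline purchase rate: 0.62%

# Incremental purchases: buyers beyond what the baseline predicts.
incremental = test_buyers - control_rate * test_size
print(f"Incremental purchases: {incremental:.0f}")  # 18 of the 80 buyers
```

In this sketch, only 18 of the 80 purchases were actually caused by the campaign; the other 62 would likely have happened anyway, each now carrying an unnecessary 20 percent discount.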
While it can be hard to prioritize experiments, they can give you a better idea of the true return on investment (ROI) for your marketing, which can pay off big in the long run.
The following are five keys to effective marketing experiments.
1. Keep The Control And Test Groups Balanced
Stay disciplined about control groups. The temptation is to shrink the control group to maximize the earning potential of the test. Without an adequately sized control group, however, deriving meaningful results and conclusions becomes next to impossible, because you don’t have a good read on baseline customer behavior.
Start out with a 50 percent control group, then cut it down as you begin to have a better sense of what works and what doesn’t.
2. Make Sure The Control And Test Groups Look The Same
Your control group needs to be the same as the test group. Not “kind of” the same; it needs to be identical. If you’re trying to see how a 10 percent discount works for mothers in Texas, then both your control and experimental groups need to be mothers from Texas.
To do this, you need to define your segment, and then randomly assign members to the test and control groups. Some email service providers can do this for you, otherwise you’ll need to use Excel or another tool to create the random groups yourself and then upload accordingly.
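If your email service provider can’t do the random assignment for you, a few lines of Python work just as well as Excel. This is a sketch with a hypothetical customer list; in practice you’d load the segment you exported from your platform:

```python
import random

# Hypothetical segment; in practice, load your exported customer list.
customers = [f"customer_{i}@example.com" for i in range(1000)]

random.seed(42)  # fixed seed so the split is reproducible
shuffled = customers[:]
random.shuffle(shuffled)

# 50 percent holdout to start, per key #1 above.
split = len(shuffled) // 2
control_group = shuffled[:split]  # receives nothing
test_group = shuffled[split:]     # receives the campaign
```

Shuffling the whole segment first, then cutting it in two, guarantees every customer lands in exactly one group and that assignment is independent of anything you know about them.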
3. Eliminate Other Variables
Since you’re trying to see the impact of one idea, and only one idea, then you need to make sure that the test and control groups are otherwise identical in treatment. If one group gets a daily email about socks, then the other group needs to get that same sock-loving email; otherwise the results are tainted and therefore meaningless.
4. Keep It Simultaneous
Things change — strategies shift, seasons change, competitors rise and fall. Given these facts of business, you need to make sure that your test and control groups are being tracked at the same time. You can’t compare last month’s holdout group to this month’s test group.
5. Make Sure Results Are Trackable
The whole idea is to be able to measure the test and control groups against each other, so make sure you know what you’re tracking and how you’re going to track it.
And since the control group isn’t getting any emails, you can’t use clicks or opens to compare. (Customers can’t open an email they never received.) Look at revenue, orders, site visits, conversion rates, and so on during the test period to see whether the campaign did well enough to merit further investment.
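When you do compare conversion rates between the two groups, it helps to check whether the difference is bigger than random chance would produce. One common approach (not specific to this article) is a two-proportion z-test; here’s a standard-library sketch with hypothetical numbers:

```python
from math import sqrt, erf

def conversion_lift(test_conv, test_n, control_conv, control_n):
    """Compare two conversion rates with a two-proportion z-test."""
    p1, p2 = test_conv / test_n, control_conv / control_n
    pooled = (test_conv + control_conv) / (test_n + control_n)
    se = sqrt(pooled * (1 - pooled) * (1 / test_n + 1 / control_n))
    z = (p1 - p2) / se
    # One-sided p-value: chance of seeing a lift this large with no true effect.
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return p1 - p2, p_value

# Hypothetical: 120 of 10,000 test customers converted vs 90 of 10,000 control.
lift, p = conversion_lift(120, 10_000, 90, 10_000)
print(f"Lift: {lift:.2%}, p-value: {p:.3f}")
```

A small p-value (conventionally under 0.05) suggests the campaign genuinely moved the metric rather than the two groups drifting apart by luck.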
To Recap
To measure the real impact of your next marketing campaign (and see its true incremental effect), make sure to have a holdout group that doesn’t receive the campaign so that you can properly compare results. Happy testing.