
Boom or Bust? Using Split Testing to Measure Your Facebook Marketing Performance

January 9, 2017

You’re beaming, proud, and ready to rake in a massive amount of leads. Why? Because you’ve got two brilliantly designed sets of ad creative, and you’re ready to set ’em loose on the hungry, scrolling consumer masses.

How do you know if your campaign will be a boom or bust? Can you even test such a thing in an easy and straightforward way?

Time for some split testing…

Some What?


For those who haven’t yet implemented split testing to increase conversions, an explanation is in order. Simply put, split testing (also known as A/B testing) lets you test different advertising strategies on evenly divided, non-overlapping audience segments to see what works and what doesn’t.

Want to see which bidding option, creative, or ad placements perform best? Split testing is the answer.

How Does It Work?


The Facebook split testing API does several great things:

  • Automates audience segmentation
  • Ensures there’s no overlap between segments
  • Allows you to test variables like bidding type, ad placement, different ad creative, and more
  • Takes away the hassle of manually building unique audiences and running your test campaigns independently
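
The practical payoff of that no-overlap guarantee is clean attribution: every user sees exactly one variant. Here’s a minimal sketch of how deterministic, non-overlapping segmentation can work in principle (our own illustration in Python with hypothetical user IDs; Facebook’s actual implementation lives behind the API):

```python
import hashlib

def assign_segment(user_id: str, weights: list[float]) -> int:
    """Deterministically assign a user to exactly one test segment.

    Hashing the user ID means the same user always lands in the same
    segment, so segments can never overlap.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    # Map the hash onto [0, 1) and walk the cumulative weights.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000 / 10_000
    cumulative = 0.0
    for segment, weight in enumerate(weights):
        cumulative += weight
        if bucket < cumulative:
            return segment
    return len(weights) - 1  # guard against floating-point rounding

# A 50/50 split for two ad sets; [1/3, 1/3, 1/3] would give a three-way test.
print(assign_segment("user-12345", [0.5, 0.5]))
```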


Where to Start and How to Run with It


First, let’s start with a simple example. Let’s go back to those two stellar ad creatives. At this point, of course, you don’t know which one’s going to perform better. The first thing you should do is set up two ad sets, with one of your creatives in each (in other words, one ad per set, for a total of two ads). Keep the copy the same for both ads.

[Image: sample split test setup]



For the purposes of this example, then, our plan is straightforward:

  • Create our two ad sets
  • Target the same audience
  • Split test to see which one performs best


To run the split test, you’ll need to set it up in Facebook. (If you’re a Marin customer, contact your account manager for help with this.) The split can be 50/50, 33/33/33, and so on, depending on the testing variables, but note that 50/50 is the most commonly used model. So, if an audience has 10 million people, each of the two ad sets will target 5 million people.

From here, we select the image as the variable to test. Our main KPI is conversions (downloads), and we’re allocating $5,000 per ad set. As we mentioned above, our audience is 5 million per ad set. We’ll run the campaign for two weeks, giving the test the broad reach, sufficient budget, and long duration it needs to produce reliable results.

Since we want to see positive results before we extend our campaign to other markets, we’ll start in the UK.
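
Before launching, it helps to write the whole scope down in one place. Here’s a sketch of the plan above as a simple data structure (the field names are our own; the values come straight from the example):

```python
from dataclasses import dataclass

@dataclass
class SplitTestPlan:
    variable: str            # the single variable under test
    objective: str           # campaign objective
    kpi: str                 # main KPI used to pick the winner
    budget_per_ad_set: int   # in USD
    audience_per_ad_set: int
    duration_days: int
    market: str

plan = SplitTestPlan(
    variable="image",
    objective="conversions",
    kpi="downloads",
    budget_per_ad_set=5_000,
    audience_per_ad_set=5_000_000,
    duration_days=14,
    market="UK",
)
```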

Ready, set, test, measure.

How to Scope a Test That’s Right for You


When scoping your own split test, make sure that the test will have value for you, and that you’ll see clear results that you can use to refine and improve your campaigns.

The first questions you should answer include:

  • Which ad account are you planning to use?
  • What are the campaign timelines, including start and end dates?
  • What’s the budget, broken down by test group?
  • What variables would you like to test?
  • What’s the campaign objective and the main KPI?


Analyzing Results


Back to that riddle: is it a boom or a bust? To determine which variant won, pick the one that delivered against your objective most efficiently.

In our example, our objective is conversions and the main KPI is downloads. So, we can consider the ad set with the lower CPA (cost per acquisition) the best performer.
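
As a concrete illustration, here’s that comparison in code (the spend and conversion numbers are hypothetical; in practice they’d come from your reporting):

```python
def cpa(spend: float, conversions: int) -> float:
    """Cost per acquisition: spend divided by conversions."""
    return spend / conversions

# Hypothetical results after the two-week test:
results = {
    "ad_set_a": {"spend": 5_000, "conversions": 400},  # CPA = $12.50
    "ad_set_b": {"spend": 5_000, "conversions": 250},  # CPA = $20.00
}

winner = min(results, key=lambda name: cpa(**results[name]))
print(f"Best performer: {winner}")  # ad_set_a
```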

And there you have it. Easy, right?

Best Practices and Recommendations for Maximum Success


If you’d like to dig deeper (and we recommend that you do), here are a few best practices.



Define an acceptable confidence level



Before you create a test, determine an acceptable confidence level (95% is a common choice). If your results don’t reach it, test with larger reach, longer schedules, or higher budgets.
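
One way to check whether a result clears your confidence level is a standard two-proportion z-test on the conversion rates. Here’s a sketch using only the Python standard library (the conversion counts and reach figures are hypothetical):

```python
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test on conversion rates; returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # via the normal CDF
    return z, p_value

# Hypothetical: 400 vs. 250 conversions on 1,000,000 reached users per ad set.
z, p = two_proportion_z(400, 1_000_000, 250, 1_000_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at 95% confidence if p < 0.05
```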



Choose one variable to test



This allows you to pinpoint the exact difference that drove the better performance. If you change several things at once, you won’t know which one mattered.



Define main KPIs before the test



This will allow you to determine the best-performing variant.



Ensure both test sizes are comparable



When testing for volume metrics such as the number of conversions, scale your test groups so they’re comparable in size; otherwise, raw counts can’t be compared directly.



Start testing on one specific market or campaign



This will allow you to monitor and analyze the test results more efficiently, which will in turn allow you to draw better conclusions. If you find this useful, you can conduct further tests on different variables and expand to other markets.



Test based on one large audience



The audience should be big enough to be split and to allow you to gain sufficient insights.



Allocate the same budget to the test groups



If you’re running your splits at the campaign level, make sure both campaigns have the same lifetime budget. If you’re testing on the ad set level, both ad sets should have the same lifetime budget.



No changes to the test groups



Any changes could compromise the split testing and prevent you from seeing clear results.


Jana Christoviciute

Marin Software