OK, I get it. The title seems to be clickbait heaven. And I’ll admit it, the minute we saw the results of our experiment we thought that this would make a great case study. Because let’s face it, what’s better than such a dramatic increase in ROI when you’re trying to convince a client that you’re the right agency for them?
The good news is that there is a hell of a lot that we can learn (and teach) from the experiment, but we can’t promise that you’re going to get a 3,000% increase in ROI every time you use this method. Our client’s setup was so dire that they were barely breaking even on their ad spend; our starting point was so low that we could hardly have done any worse.
It got better. Much better:
And this is the story of how we did it.
Some Background: ROI / ROAS
Before going any further, you need to be familiar with the concept of ROAS (Return on Ad Spend), which is the main component we use when calculating ROI. We’ll be referring to it as ROI because we tend to include other costs in our calculations (like our fees), but the basics are the same.
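To make the distinction concrete, here’s a minimal sketch in Python. All the figures are hypothetical, purely to illustrate how including other costs changes the number:

```python
# Hypothetical figures, for illustration only.
ad_spend = 10_000.0    # paid to the ad platform
agency_fees = 2_000.0  # paid to the agency
revenue = 130_000.0    # revenue attributed to the ads

roas = revenue / ad_spend                 # return on ad spend alone
roi = revenue / (ad_spend + agency_fees)  # return including other costs

print(f"ROAS: {roas:.1f}x")  # 13.0x
print(f"ROI:  {roi:.1f}x")   # 10.8x
```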
Some Background: Our Client
We can’t divulge who our client is, because we’d be giving too much away. We can’t even name the industry they work in, since we only work with one client in that industry, but here’s what we can tell you.
Our Client:
- has attribution for digital sales that is as close to watertight as possible (this made our life much easier, because we could see the direct effects of our work)
- has three sales channels – an iOS app, an Android app and their website (which gave us more to test with)
- operates in a market with incredibly fierce competition (which meant that, as the competition realised what we were doing, our life became harder)
- trusted us to do the right thing, because we set a target that they would be happy with and put 100% of our fees on the line if we didn’t reach it (this trust was critical to success, because this method requires steady hands).
On Fundamentals
I’m going to talk about fundamentals throughout this post, so let me spend a minute getting the definition out of the way. I’m not sure this is the standard way to refer to it, or that the meaning is universally understood, but for this post to make sense we need to be on the same page.
A fundamental is a single decision that our team agrees is the best way to do something to reach a specific goal for a specific ad account/client.
An example fundamental discovery could be that (all other things being equal):
- For conversions
- For Client A
- On Facebook ads
- A video ad will give higher ROI than a static ad.
This is something we retest from time to time, and it’s not a rule we can apply to all situations, which is why we make a point of testing fundamentals every time we start working with a new client or a new audience.
Our Method
Before diving into what we did to grow our client’s ROI, I’m going to give away the secret recipe. The rest of this post will give you useful context, but all you should need to do is follow this simple set of steps. We borrowed them from the scientific method, and, to a certain extent, they’re just the basis of every A/B test in history, only more aggressive.
The main difference from what we usually see with A/B testing is that we go all-in, heavily, with multiple tests that each answer only one fundamental question at a time.
Here are the steps (we’ll go into more detail further down):
- Set your objectives and KPIs. This method only works if you have goals that you can measure objectively, ideally rapidly.
- List a set of fundamental questions that you want to answer about your ads, your ad account, your audience, and/or your targeting methods.
- Set up a calendar of the fundamental tests you want to run, and slot them into an order, based on gut feeling and experience, that puts the tests where you expect to see the biggest differences first.
- Set up the test for your fundamental question.
- Run the test. Aggressively. Put all your budget for the period (whatever makes sense for your cycle) into an A/B test for that question.
- Document your learnings, shift the rest of your budget into the winning variant and go back to step 4.
- Cycle steps 4-6 until you’ve tested all your fundamentals, then measure the performance of your new and improved ad account.
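To make the shape of the cycle explicit, here’s a minimal sketch in Python. Everything in it is a hypothetical stand-in (run_ab_test, roll_out_winner and the example questions); it models the flow of the steps above, not a real ad platform:

```python
import random

fundamentals = [
    "video ads vs static ads",
    "engagement goal vs landing page views",
    "automatic placement vs manual placement",
]  # ordered by expected impact (step 3)

def run_ab_test(question: str) -> str:
    """Stand-in: run both variants head to head until one passes your threshold."""
    variant_a, variant_b = question.split(" vs ")
    return random.choice([variant_a, variant_b])  # in reality, your KPI decides

def roll_out_winner(winner: str) -> None:
    """Stand-in: shift the remaining budget for the period onto the winner."""
    print(f"All budget now behind: {winner}")

playbook = {}
for question in fundamentals:       # cycle steps 4-6 (step 7)
    winner = run_ab_test(question)  # one variable per test, full budget (steps 4-5)
    playbook[question] = winner     # document the learning (step 6)
    roll_out_winner(winner)
```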
It seems simple, and when you get down to it, it is. That’s the beauty of it. This method will yield incredible results as long as you can find incremental gains good enough to build on top of each other.
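The arithmetic behind that claim is worth spelling out. In the sketch below, the 40% average lift per winning test is an assumption, not a measured figure; it simply shows how quickly moderate wins compound:

```python
# How compounding test wins add up. The 40% per-test lift is hypothetical.
baseline_roi = 1.3    # roughly where our client started
lift_per_test = 0.40  # assumed average lift from each winning test
number_of_tests = 10

roi = baseline_roi
for _ in range(number_of_tests):
    roi *= 1 + lift_per_test

print(f"ROI after {number_of_tests} compounding wins: {roi:.1f}x")
# -> ROI after 10 compounding wins: 37.6x
```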
What happened before us
I’ve already admitted it: what we found was a mess. The client had a single internal resource managing all their marketing and ads, and that person was doing their best to hold onto their role by obfuscating their work. This meant that we could see the results of their work in the ROI figures from AppsFlyer (which had been set up correctly by the client’s excellent tech team), but we could not learn anything from their previous spend.
The ROI of the client’s Facebook campaigns, however, was abysmal. For every €1 the client was spending, they were getting back around €1.32 in revenue. We had no insight into what this person was thinking, but we could see that all their ads were optimised for engagement, when what the client was actually measuring was revenue.
They had been running the same few creatives to an incredibly broad audience for a very long time, the calls to action were misleading (get your calls to action right by reading this post), and single ad sets contained various ads that should have been running separately.
Step 1 – Setting your objectives and KPIs
It’s critical that you start by choosing your goals and getting complete buy-in from all the stakeholders who could influence the progress of this experiment. You’ve read about SMART goals forever, but this is the time to really apply that reasoning. Unless you can measure the success of your tests, you’re going to be wasting time and money.
Your tests might be running on different metrics (you might be measuring click-throughs on one specific fundamental test, even if your final KPI is ROI), but you need to be absolutely certain that the metric you’re measuring has a direct impact on your final KPI.
In our campaign we were all working towards one KPI – we wanted to take our ROI from 1.3X ad spend to a minimum of 13X ad spend.
Step 2 – List a set of fundamentals
This is the critical part of the equation. Every business, and therefore every ad account you work on, will have a different set of fundamental tests to run. Take the time to choose the right set of tests, because every one of these tests is an expensive endeavour, and you want to run tests that leave no doubts. In our case we were testing quite a few fundamental hypotheses for the account, and I’m listing them here because they could serve as inspiration:
- Which Facebook goal works best?
  - Engagement ads vs Landing page views
  - Winner of the above vs Conversion
- Do ads work better if we split conversions by target platform (app conversions and web conversions instead of the generic conversions campaign setting)?
- Which bidding strategy works best?
- Does automatic placement give you better results than manual placement?
- Does an app install campaign give you better LTV clients than a simple conversion campaign?
- Which CTAs work best with your audience?
- Do you sell more when you advertise a product or when you advertise categories?
- Does a specific price/discount work better than another?
- Does it make a difference to our campaign if we only run ads at specific times of the day (our client runs a 24/7 operation)?
- Specific manual placements vs the rest
- Specific audiences vs the rest
- We also tested some creative choices:
  - Which of the two main USPs gave us the highest ROI?
  - Do static ads work better than videos?
Whatever you do, don’t copy this list. Think long and hard about the tests that you should run for your business in your market, with your business goals.
Step 3 – Build a calendar of tests
Set your tests up in a calendar in the order in which you plan to test the fundamentals. This removes all the decision-making from later in the process; later on, you want to be absolutely focused on testing and nothing else.
Also take this time to set your thresholds for each test. How will you know when one of the options has won? This will allow you to skip to the next test as soon as you’re certain that one option is much better than the other.
Set a first-past-the-goalpost threshold with a percentage. If you’re going to measure on 200 conversions, for example, you can agree internally that if one of the variants hits 100 while the other is still below 60, the win is emphatic enough; there’s no need to see whether the next 40 sales will change the course of the test. They won’t.
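Here’s a minimal sketch of that stopping rule in Python. The numbers mirror the example above, and check_early_winner is a hypothetical helper, not any platform’s API:

```python
def check_early_winner(conversions_a: int, conversions_b: int,
                       target: int = 200, lead_ratio: float = 100 / 60) -> str | None:
    """Return the leading variant if its win is already emphatic, else None."""
    leader = max(conversions_a, conversions_b)
    trailer = min(conversions_a, conversions_b)
    # Call it early once the leader is halfway to the target and the
    # trailer is far enough behind (e.g. 100 vs anything below 60).
    if leader >= target // 2 and trailer * lead_ratio < leader:
        return "A" if conversions_a > conversions_b else "B"
    return None

print(check_early_winner(100, 55))  # -> A: emphatic enough, stop the test
print(check_early_winner(100, 75))  # -> None: too close, keep running
```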
Step 4 – Set up the test for your fundamental
Lights. Camera. Action! This is where the fun begins. Set up your campaigns as you would set up any other A/B test: create identical campaigns with only one difference, the one thing that you’re testing.
Don’t call us Luddites, but we like to test manually: set up separate campaigns and actually run them independently. We didn’t let Facebook run our tests for us, because we’re not always certain that there isn’t some other algorithm at play.
Step 5 – Run the tests aggressively
This is where client (or boss) buy-in becomes critical. It’s very easy to discuss this concept theoretically, but it’s much harder when you’re spending actual money to run the tests. We were lucky enough to be starting from an abysmal position, so when the first three days of testing delivered 500% more ROI than we had started with, it became much easier to convince our client to back our method.
The method requires aggression because it relies completely on a rapid succession of compounding returns. It might seem drastic, but the most you are risking is the value of half your budget for a period of time.
Which brings us to the next question: how long is long enough to run a test? This will change from business to business, and it will probably depend most on the size of your budget. If you’re running a small B2B site whose goal is to generate 10 leads a month on a budget of $200 a month, you’d probably need a few months to test each fundamental (at 10 leads a month, 50 results takes about five months). Anything less than 50 results (whether they’re leads or sales) is too little to draw a meaningful conclusion from.
If you’re running an account with a couple of thousand dollars a week for 500 sales a week, then you can probably run a test or two every week. Only you can really tell how much data you need for your test to be meaningful, but don’t skimp on this process: the long-term benefits far outweigh the cost of running the tests.
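As a back-of-the-envelope helper, here’s that 50-result floor turned into a test-length estimate. Whether you require 50 results per variant or per test is your call; this sketch assumes per test:

```python
import math

def weeks_per_test(results_per_week: float, min_results: int = 50) -> int:
    """Weeks until a test accumulates enough results to call a winner."""
    return math.ceil(min_results / results_per_week)

print(weeks_per_test(10 / 4.33))  # ~10 leads/month -> 22 weeks, i.e. months per test
print(weeks_per_test(500))        # 500 sales/week  -> 1 week (or less) per test
```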
By putting all our budget behind these tests we were able to run them even more rapidly than we had anticipated, and over a period of six weeks we ran over 10 tests, usually running two tests in a week when we decided that we had enough data to move on to the next one.
Step 6 – Learning
The key to running successful fundamental tests is that you learn from them. Because our winners were emphatic, we could be absolutely certain that each decision was made on solid data, so we didn’t have to go back and rerun a full test on any fundamental for quite a while.
Step 7 – Cycle steps 4-6
By building the fundamental learnings of the account into a playbook for the client, we took the account from 1.3X ROI to 44X ROI in just 4 weeks, an improvement of over 3,300%. By the end of the 6-week testing period we were ready to ramp up the client’s budget, the final test of whether the results were scalable, and that is where we settled on the perfect balance for the account: a return of around €37 for every €1 spent.
We made a list of fundamental rules for the ad account that the team follows every time we set up a campaign, and we still run fundamental tests on a monthly basis, either to test new hypotheses or to confirm that the original results still hold.
Over the next year or so the client was hit heavily by Covid. As we scaled back budgets, however, the fact that we had sorted out all our fundamentals in the original tests meant that we could cut spend in the areas where we knew the client would suffer least. At no point in the pandemic did the client fall below a return of €15 for every euro spent, and they’re now hovering happily at around €80 in revenue for every euro spent on ads.
This is what the ad account looks like now, with peaks and troughs driven by seasonality, but a healthy ROI throughout:
A note about cautious clients or bosses
Our client was super comfortable taking rapid risks in succession. To be fair, the benefit after just the first test was so great that they had nothing to lose. Having said that, if you need to offer some stability, I’d recommend running the tests at full budget for the period you need, but then letting the winning version run for a few more weeks before testing again.
Richard Muscat Azzopardi is an experienced marketer and the CEO of Switch – Digital & Brand, a boutique brand and marketing agency that loves partnering up with clients who get it. He’s been working in digital marketing since the inception of the internet, even though he’s not as old as you’d think he is. He’d love to connect with you on LinkedIn!