This article is for all subscribers! Enjoy and don’t forget to share feedback at the end.
The 5 tactics
Marketing Science is the fun job of trying to prove and improve the ROI of Marketing campaigns.
Our sole reason for existence is for CMOs to come up with elaborate campaigns and tell us in the 11th hour that they want statistically accurate ROI calculations, so that they look good to the CEO.

I’m just kidding. Totally not triggered at all.
But seriously, I’ve been in Marketing Data Science for 10+ years and it is one of the most exciting jobs out there. Tests are hard to set up, problems are complex, and you’re working on things that touch the customer.
Below, I’ll share the 5 types of analysis I’ve used to measure > $1.5B in Marketing spend and generate $180M+ in revenue for companies like Uber, DIRECTV, Otrium, and others.

The types of analysis are plotted on a scale of 1 (least) to 5 (most) for both accuracy and complexity.
The goal is to be in the top right corner, but as you'll see, that's never the case.
Buckle up, kids! Here we go.

IYKYK.
Definitions
Complexity → The complexity of a methodology refers to how difficult it is to set up and analyze. Communicating results based on the methodology adds another layer of complexity, but I didn't factor that into the score.
Accuracy → The accuracy of a methodology refers to how statistically accurate it is and how much confidence you can place in the results.
Campaign → I’m assuming you understand what a campaign is but you may see it referred to here also as an intervention.
Metric of interest → The metric of interest aka primary metric is the metric you want to influence with your campaign.
Pre period → aka Pre is the period of time before a campaign starts. It’s usually defined in # of weeks.
Post period → aka Post is the period of time after a campaign starts. It includes the period of the campaign itself too.
Pre/Post
1/5 complexity, 1/5 accuracy
The Pre/Post methodology gets its name because it compares a metric from the period before a campaign (Pre) to what happens in the period after it starts (Post).
It’s a simple method that has been around since the dawn of time.
Example

Let’s look at the chart above.
In this example, they wanted to measure the impact of a test on “Mean Knowledge Score”.
They survey a group of students before the test (blue).
They survey a group of students after the test (red).
They compare the results.
The “Mean Knowledge Score” has gone up from before.
How to set it up
Pick your metric of interest
Understand how long your campaign is going to run
Create a baseline in the Pre period (usually the length of your campaign)
Launch your campaign
Calculate the metric of interest in the Post period
Compare the difference
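The steps above can be sketched in a few lines of Python. This is a minimal illustration, not a production analysis; the function name and the weekly numbers are made up for the example, and the Pre window is the same length as the campaign, as described above.

```python
from statistics import mean

def pre_post_lift(pre, post):
    """Compare the metric of interest before and after a campaign.

    pre, post: lists of weekly values for the metric of interest.
    The Pre window should usually match the campaign's length.
    """
    baseline = mean(pre)   # step 3: baseline from the Pre period
    observed = mean(post)  # step 5: metric in the Post period
    lift = observed - baseline
    return {
        "baseline": baseline,
        "observed": observed,
        "lift": lift,
        "lift_pct": lift / baseline * 100,
    }

# Hypothetical weekly signups around a 4-week campaign
pre_weeks = [100, 105, 98, 102]    # 4 weeks before launch
post_weeks = [120, 118, 125, 122]  # 4 weeks from launch
result = pre_post_lift(pre_weeks, post_weeks)
print(f"Lift: {result['lift']:.1f} ({result['lift_pct']:.1f}%)")
```

Note that this only computes a difference; it says nothing about whether that difference is caused by the campaign, which is exactly the drawback discussed next.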
Drawbacks
The biggest challenge is that it's very hard to isolate your campaign's impact on the metric of interest from everything else going on.
In today’s world there are so many factors:
PR
Seasonality (the ultimate scapegoat)
Competition
Pricing changes
Product changes
Algorithm changes
Macro factors like war, economy, etc.
With the emergence of more digital channels and social media, your campaign and company can explode or implode within days.
You might think "I'll just look further back in the Pre period to stabilize the data set."
You'll end up spending even more time explaining away confounding factors.
It's just really tough, and when you present the analysis, people from random teams will chime in with "Have you thought about this?" You couldn't have thought of everything.
When to use it
If you're a startup that understands its seasonality and only has 1 to 2 channels, without access to more sophisticated measurement tools.
Even then, you have to make sure you’re not shipping any major product changes or increasing budgets in your other marketing channels.
If you don’t fit the criteria above, don’t use it.
Please.
Just please don’t use it.
Actually, I might just simplify to NEVER USE IT.
Diff-in-diff
2/5 complexity, 2/5 accuracy
Diff-in-diff stands for difference-in-differences.
It is the slightly more mature and responsible version of a Pre/Post.
It has guardrails and an extra set of data that increases confidence.
It’s not complex and is more accurate but as you’ll see it has a lot of pitfalls.

The methodology is in the name and is well explained in the chart above.
Example
You want to launch a campaign in Paris and are asked to estimate the impact.
For your business, London and Paris behave very similarly.
You look at the 6 weeks before your planned campaign start and observe that the 2 cities act similarly and that the gap between London and Paris is fairly constant.
You then launch a campaign.
How to set it up
Start by identifying your treatment group (likely a geo; otherwise you could just A/B test)
Then identify another group that acts similarly in the Pre period
Measure the difference in the Pre period (difference # 1)
Then, monitor the relationship after the campaign starts
Measure their difference in the Post period (difference # 2)
Measure the difference in the differences (difference # 2 - difference # 1)
Step 6 is then your estimate of the campaign's impact.
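The six steps above reduce to a small calculation. Here's a minimal sketch in Python; the function name and the weekly numbers for Paris and London are hypothetical, chosen only to illustrate the arithmetic.

```python
from statistics import mean

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Estimate campaign impact via difference-in-differences.

    Each argument is a list of weekly values for the metric of interest.
    """
    diff_1 = mean(treat_pre) - mean(control_pre)    # step 3: Pre-period gap
    diff_2 = mean(treat_post) - mean(control_post)  # step 5: Post-period gap
    return diff_2 - diff_1                          # step 6: estimated impact

# Hypothetical weekly orders: Paris (treatment) vs London (control)
paris_pre, paris_post = [200, 210, 205], [260, 265, 255]
london_pre, london_post = [180, 190, 185], [182, 188, 185]
impact = diff_in_diff(paris_pre, paris_post, london_pre, london_post)
print(impact)
```

The subtraction of the Pre-period gap is what makes this more defensible than a plain Pre/Post: anything that moves both cities equally cancels out, leaving only the change unique to the treatment group.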
Drawbacks
Diff-in-diff requires a stable relationship between the 2 groups in both the pre period and the post period.
This means that all the challenges with Pre/Post also apply here.
Let’s revisit our Paris / London example.
Once you launch the campaign, you notice a huge difference opening up between London and Paris.
Looks like your campaign is crushing it!
Except it’s July 2024 and a little thing called the Olympics is happening in Paris at the same time as your campaign.
Unfortunately, London isn't also hosting the Olympics.
The relationship between London and Paris has totally changed from the assumptions we were using in the pre period.
You can no longer use a Diff-in-diff methodology to estimate the impact.
This is an extreme example, and any good Marketer should know the Olympics are coming up, but it's representative of all the hurdles you have to consider when using a Diff-in-diff methodology.
When to use it
If you're a startup that understands its seasonality and has multiple cities, products, or other groups that are big enough to be compared.
Even then, you have to make sure you’re not shipping any major product changes or increasing budgets in one group or another to impact the test.
Diff-in-diff is also a good methodology just to sense check your results but not as the primary tool for measurement.
It’s best for measuring offline channels and simpler tests.


