Why you need to read this

The biggest obstacle to marketers' success is their inability to prove value to the business. CFOs don't believe in the ROI. CEOs don't see the growth.

While most marketers obsess over the creative brief, they overlook the measurement brief, which is just as integral to a successful campaign. Without it, campaigns get derailed by a lack of clarity and disagreement over whether the campaign succeeded or failed.

In this article, you'll learn what a measurement brief is, why it's essential, and how to create one for your next campaign.

What is a measurement brief

Sample campaign brief

A measurement brief is not a companion to the campaign brief. It's a part of it. It has the simple job of outlining how you're going to measure the success of the campaign.

If the creative brief defines what to say, the measurement brief defines how to know if it worked. Its real job is to align marketers, analysts, and other stakeholders on what success looks like.

Now you might not think you need a measurement brief, but let me know if you've ever launched a campaign and then fallen into one of these situations:

  1. You're not sure if it actually worked

  2. You present conflicting signals because one metric is up and another is down

  3. Stakeholders question the validity of what you're presenting

Now you've lost your audience's trust because it's unclear whether the campaign delivered on its promise. Worse, people think you're just picking random numbers to make your campaign look good. This is a common knock against marketers, and a lot of it can be avoided with a really strong measurement brief.

Here's exactly what a measurement brief will help you with:

  1. Avoiding vanity metrics

  2. Building trust with stakeholders

  3. Preventing post-campaign debates

  4. Prioritizing analytical resources up front

  5. Providing accountability and next steps

Now, let's dig into the elements of a measurement brief.

Elements of a measurement brief

First, I'll go through the elements of a measurement brief. Then, I’ll provide an example template that you can copy and use in your own business. Finally, I'll write an example measurement brief so you can see what it looks like.

Context and objectives

The first part of the measurement brief is the campaign context and objectives. This repeats material from the broader campaign brief, but it's worth reiterating the following:

  • What insights led us to create this campaign?

  • What is the goal of this campaign?

It’s simply setting the stage for the measurement brief.

Learning agenda

The second part is the learning agenda. In this section, you clearly outline what you're trying to learn from the measurement setup.

Remember: every measurement effort exists to help you learn something. Did this happen, or did it not?

Some key questions to answer might be:

  • What's our hypothesis?

  • What are we trying to learn?

When you skip this part, it inevitably means you lack clarity on what you're trying to learn from the campaign, which makes it harder to set up a robust measurement plan.

Success metrics

The third part is the success metrics. This is one of the most important steps, and it's where a lot of campaign measurement falls apart. You need to very clearly outline the primary metric, secondary metrics, and guardrail metrics.

Let's go through what each of those means:

  • Primary metric → the sole metric you use to determine the success or failure of a campaign. It doesn't have to be a business objective. This is the most important metric to align on.

  • Secondary metrics → metrics that you're curious about and want to observe and monitor, but for learning purposes only. They don't determine success or failure.

  • Guardrail metrics → metrics the campaign must not harm (and that you don't expect it to), but that you monitor to make sure. These differ from secondary metrics because they can be stopper metrics that halt the campaign if they are impacted.

The biggest pitfall after a campaign is trying to use a secondary metric to justify the result of the campaign.

You: “Oh but look, our followers went from 1 → 5. That's a 400% increase!”

Everyone else: 💀
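To make that pitfall concrete, here's a quick sketch (using the made-up follower counts from the joke above) of why a big percentage can hide a trivial absolute change:

```python
def pct_change(before, after):
    """Percent change from a starting value to an ending value."""
    return (after - before) / before * 100

# 1 -> 5 followers really is a 400% increase...
print(pct_change(1, 5))   # 400.0
# ...but the absolute gain is only 4 followers, which is why
# secondary metrics on tiny bases can't justify a campaign.
print(5 - 1)              # 4
```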

Methodology

The fourth part is the measurement methodology.

Here’s where you need to answer 3 key questions:

  1. How will metrics be calculated?

  2. What data sources are used?

  3. What type of experiment method are we using?

This is the most technical part of the brief and requires very tight collaboration with data science.

This is also a good place to state that you will not be measuring the campaign. It is entirely realistic and valid to say that you cannot use causal or other scientific methods to measure a campaign in a robust way.

For example, if you have a national TV campaign, you might not be able to measure it, and sharing that ahead of time is totally okay as long as everyone is aligned on it.

Reporting roles & responsibilities (RR&R)

The fifth part is the reporting roles and responsibilities.

It sounds simple, but teams make a lot of assumptions about who is going to deliver what.

Here’s what you need to ensure is in this section:

  1. Who's accountable for pulling data?

    It might be a singular point of contact on the data science team or it might be a broader team, but you need to have accountability.

  2. What metrics will be shared?

    Are we covering all the metrics in every review or will it only be certain ones? What level of rigor? Will it just be directional?

  3. When will results be reviewed?

    Will we review it weekly or monthly? When is the final deliverable date?

  4. How will the results be provided?

    What is the mode of delivery? Is it a dashboard, a one-pager, or some other format?

This might feel like overkill but all of this needs to be outlined so you have milestones, accountability, and structure.

Risks and assumptions

The last part is risks and assumptions.

This section allows you to have a lawyer-style addendum that covers your butt but also adds validity and credibility to the campaign. Companies are tolerant of risk. They are not tolerant of surprise risk.

This section helps to mitigate risks. Here, you’ll want to cover:

  1. What could skew the results or what risks do we have in tracking?

  2. Are there gaps in data?

  3. Is there a data lag that we will have to account for?

  4. What assumptions are we making?

  5. What is the baseline growth we’re starting from?

Anything that could derail the campaign itself or measurement should be included here.

Template

Below is an example template. It's super simple, and we'll fill it out together so you can see what it should look like at the end.

  1. Context and objectives

    1. Campaign Objective

    2. Business Goal Link

  2. Learning agenda

    1. Key Questions

  3. Success metrics

    1. Primary Metrics

    2. Secondary Metrics

    3. Guardrails

  4. Methodology

    1. Strategy

    2. Data Sources

  5. Reporting Roles & Responsibilities

    1. Owners

    2. Reporting Cadence

    3. Deliverable

  6. Risks and assumptions

    1. Risks

    2. Assumptions

Example measurement brief

  1. Context and objectives

    1. Campaign Objective

      Based on our brand tracking, we found that our brand awareness is 15 percentage points behind competitor A. To change this, we're now launching a new campaign with the goal of improving awareness.

    2. Business Goal Link

      While the primary goal of this campaign is awareness, we also expect to see some impact on sign-ups and installs in the short term (< 3 months).

  2. Learning agenda

    1. Key Questions

      The primary goal of this campaign is to answer the question of "Does the message that we've created increase unaided awareness with our audience?"

      While we've done creative pre-testing, we want to ensure it works at a broader scale with our goal of increasing awareness. We also want to understand if a multi-channel campaign can drive this impact.

  3. Success metrics

    1. Primary Metrics

      The primary metric of this campaign is awareness (unaided awareness).

    2. Secondary Metrics

      We're also going to be looking at signups, installs, and downloads in addition to consideration.

    3. Guardrails

      As this is our first campaign of this type, our guardrail metric is cost per incremental reach. We want this to be under $X, with the goal of optimizing and improving it over time. Another guardrail metric is signup volume: we expect it to stay flat or go up, and after accounting for seasonality, we don't want to see any unexplained decline in signups.

  4. Methodology

    1. Strategy

      For the primary metric, we're going to be using survey-based brand lift studies on Meta.


      For the business metrics, we're going to be doing a geo holdout test in which cities A, B, C, D, and E will be treatment, and cities F, G, H, I, J, and K will be control.

    2. Data Sources

      The brand awareness will be through Meta's brand-lift study. The business data will be through our own internal APIs. We'll be using the X table and the Y table.

  5. Reporting roles & responsibilities

    1. Owners

      Brand Awareness Metrics will be owned by our Paid Ads Team, and the internal metrics will be owned by our Data Science Team.

    2. Reporting Cadence

      We'll be doing monthly Brand Lift Studies for the next 3 months. The first internal business metric will be after three months to allow for enough signal from the Brand Awareness Campaign.

    3. Deliverable

      At one month, we'll hold a review meeting to make a go/no-go decision. Then, at three months, we'll share the cumulative impact of the brand awareness campaign along with our first early-signal readout from the internal metrics. This will be packaged in a deck.

  6. Risks and assumptions

    1. Risks

      There's no immediate risk on the brand awareness side. The brand lift studies are well-documented and set up, so there should be no concern there.

      On the internal metric side, the biggest risk is that the treatment and control cities begin to deviate. We use an x-month look-back period to ensure that these are viable, comparable cities, but it's impossible to guarantee that they will remain comparable.

    2. Assumptions

      Regarding assumptions, we assume a current baseline brand awareness of X, based on the most recent survey we ran 3 months ago.
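The geo holdout readout described in the methodology above boils down to a difference-in-differences calculation. Here's a minimal sketch; the signup counts are invented for illustration, not from the example brief:

```python
# Hypothetical pre/post signup totals for the treatment cities (A-E)
# and control cities (F-K); all numbers are illustrative.
pre  = {"treatment": 1000, "control": 1200}   # signups before launch
post = {"treatment": 1150, "control": 1230}   # signups during campaign

lift_treatment = post["treatment"] - pre["treatment"]  # 150
lift_control   = post["control"]   - pre["control"]    # 30

# The control lift approximates baseline growth and seasonality,
# so the incremental effect is the difference of the two lifts.
incremental = lift_treatment - lift_control
print(f"Estimated incremental signups: {incremental}")  # 120
```

This is also why the baseline-growth assumption in the risks section matters: if the control cities stop tracking the treatment cities, the subtraction no longer isolates the campaign's effect.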

As you can see, the example brief is not that complex, yet it surfaces a lot of key assumptions and aligns teams. I can almost guarantee that when you start to include measurement briefs, the conversations around measurement start to improve as well.

Teams begin to realize what they’re required to provide and how to think about campaigns differently. It’s a magic feedback loop.

Let’s go over some best practices and pitfalls and then wrap it up.

Best practices

  1. Start simple, don’t over-engineer.

    You might be tempted to use a complex MMM or some other modeling tool for this, but geo holdouts and A/B tests are still the gold standard. Start simple and experiment with your measurement strategies, because you won't nail it the first time. As you get more sophisticated, the gold standards remain the gold standards. Don't hesitate to draw the line on what you can and can't measure. You can't force the science.

  2. Align with finance, data science, and leadership early.

    The most important stakeholder here is the data science team. If you come the day before and ask them to create a measurement brief, they will not be able to.

    The rule of thumb we used: however long the campaign is, give the data science team about half that time as a heads-up. So for a 3-month campaign, give them about 1.5 to 2 months as a primer that you're going to launch it, and then they can tell you, "Hey, we need more time, we need regular check-ins," etc.

  3. Document assumptions clearly.

    Documenting the risks and assumptions is extremely important. Over time, they become well-known to other teams, but early on, if this is your first time doing this, it's important to call out things like:

    • Geo holdout tests don't always hold

    • A/B testing requires a certain sample size

    Documenting these early ensures that teams are aligned and understand that this is not a risk-less exercise and that there are possibilities of failure.
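The sample-size point above can be made concrete with the standard two-proportion approximation. This is a generic statistical sketch, not a method from the article, and the 10% → 12% awareness figures are hypothetical:

```python
import math

def sample_size_per_group(p_base, p_target, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size to detect a lift between two
    proportions (two-sided alpha = 0.05, power = 0.80)."""
    delta = p_target - p_base
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Detecting a lift in unaided awareness from 10% to 12%:
print(sample_size_per_group(0.10, 0.12))  # 3834 respondents per group
```

Note how quickly the requirement grows as the expected lift shrinks; that's exactly the kind of assumption worth documenting before launch.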

Pitfalls

  1. Too many metrics

    There is only one golden rule: you can only have one primary metric.

    Also, try not to have too many secondary metrics and guardrail metrics. The goal of designing metrics is not to eventually find a metric that makes your campaign look good. It's to ensure that you are measuring and driving the primary business goal or goal of the campaign.

  2. Misaligned assumptions.

    Depending on your methodology, it's important to ensure that you're aligned on how you're going to measure these campaigns. For example, in the signups example we used, is it going to be paid signups, blended signups, or all signups? You should be very clear on what attribution and methodology you're going to use for these business metrics.

    These assumptions are often the unspoken expectations of both teams, and they're what trips teams up most.

Wrapping up

A great creative brief gets the campaign out the door. A great measurement brief ensures you know it worked. This helps you build trust with your team, get more budget, and unlock a new appreciation for how your campaigns are working.

The next time you run a campaign, try adding a measurement brief. Just watch how much smarter your marketing will get. The assumptions, the objectives, the goals. They all become more clearly defined, and you'll find teams rallying around ensuring that your campaign crosses the finish line.


Enjoyed reading this?

Share it with your colleagues or on LinkedIn. It helps the newsletter tremendously and is much appreciated!
