“Measurement of [TV's] effectiveness is based on samples, not actual viewers, and often the best data you get about the audience is generalized demographic / psychographic information.”
It’s no secret that I have strong opinions when it comes to measurement and research.
My concern is simple: I don’t believe we’re measuring the right things.
It’s time we changed that.
This post outlines an alternative approach to ad measurement, but it still needs some tweaking, so I’d really appreciate your suggestions on how we might improve it.
Let’s begin with some context…
The role of measurement
Over a century ago, John Wanamaker famously remarked:
“Half the money I spend on advertising is wasted; the trouble is, I don’t know which half.”
His fears have been echoed by marketers ever since, and we continue to invest huge sums trying to identify which of our dollars are wasted.
However, this focus on wastage means we’ve been missing the forest for the trees: before we can understand how hard our investments are working, we first need to understand whether campaigns are delivering on our objectives.
Advertising success is not just about efficiency; we also need to measure its effectiveness.
To examine these factors in context, we first need to understand the objectives we hope to address with advertising – in other words, what do we want our brand communications to achieve?
Why do we communicate?
At a fundamental level, communication serves a very simple purpose:
To create a shared understanding between two or more people
It follows that the purpose of brand communications is:
To create a shared understanding between a brand and the people it wishes to influence.
So, in order to measure advertising’s effectiveness, we simply need to determine whether the audience has understood what the brand intended.
To measure the campaign’s efficiency, we need to compare the proportion of the audience that correctly understood the message with the different campaign elements they’ve experienced, and the cost of those different elements.
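To make the distinction concrete, here’s a minimal sketch in Python with entirely hypothetical spend and survey numbers: effectiveness is the share of the surveyed audience that correctly understood the message, and efficiency relates spend to that understanding. In practice you’d project sample counts to the population, but the logic is the same.

```python
# Hypothetical post-campaign survey results, broken out by channel.
# "understood" = respondents who correctly played back the intended message.
channels = {
    "TV":      {"spend": 500_000, "reached": 1200, "understood": 420},
    "Online":  {"spend": 150_000, "reached": 800,  "understood": 360},
    "Outdoor": {"spend": 80_000,  "reached": 600,  "understood": 90},
}

for name, c in channels.items():
    effectiveness = c["understood"] / c["reached"]   # share who got the message
    efficiency = c["spend"] / c["understood"]        # cost per person who got it
    print(f"{name}: {effectiveness:.0%} understood, "
          f"${efficiency:,.0f} per person who understood")
```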
In light of the above definitions, it seems logical that both measures centre on the audience’s level of understanding.
So why do we consistently resort to metrics that have so little to do with what really matters?
Reach (as media agencies currently use the term) is simply a projection of potential audience size. This is the concern that John highlighted in his comment above: reach doesn’t tell you whether anyone actually witnessed your communications, and it gives no indication of whether those who did witness them understood anything.
Meanwhile, frequency is equally limited in its value, informing us of little more than the number of opportunities each individual had to witness the campaign (but again, not telling us if they actually did see things that many times). I’ve talked about frequency’s limitations before, so I’ll avoid going into any more detail here.
The problem with these metrics is that they equate volume with success. However, the more you shout at people, the more they’ll try to ignore you.
Even ‘brand health’ metrics are compromised when it comes to determining advertising’s impact, because they tend to look at a brand’s performance in aggregate, rather than just the performance of its communications. This macro view means we cannot isolate the campaign’s influence from everything else affecting the brand, and consequently we cannot ascribe changes in brand health scores solely to advertising.
So what can we do to improve advertising measurement?
Start out right
A simple improvement would be to focus on objectives.
It’s vital that brands and their agency partners develop campaigns around what the brand wants its audiences to understand – although it’s a widely misused term, I’ll refer to this as ‘the message’.
Only once we’ve agreed this message can we be sure that we’re developing the most efficient and effective communications.
An example might help to put this in context.
A popular brand of face cream wants to grow revenues amongst existing users in Thailand. Traditional ‘brand health’ research shows that the brand is well liked and respected, but that its use is sporadic, with medium and light users applying the product just a few times each month.
Face-to-face conversations with these consumers reveal that they think it does a great job of rehydrating their skin, but the humid climate in Thailand means that this benefit is only relevant on a few days each month.
The brand recognises that it needs to modify people’s perceptions to help them see the brand as an everyday cosmetic, rather than simply as a functional moisturiser. In other words, it needs to tell them that it offers a bonus, and not just a remedy.
Further research reveals that this audience uses moisturisers to “bring their skin back to life.” When their skin is dry, they think that it looks “flat” and “tired”, but when it’s properly hydrated, they believe it exudes a “radiant glow.” A closer look at the research indicates that this radiance is not only a critical category driver, but that the brand already scores highly on this attribute.
Brand management believes that focusing the audience’s attention on this motivating benefit will encourage them to use the brand more often. They brief a campaign that drives the perception that “using BrandX every day gives my skin a radiant glow.”
This clear statement of what the brand wants people to understand ensures that everybody works towards the same objective. Rather than simply trying to ‘raise awareness’, each agency knows what it must communicate, and can therefore optimise its approach to focus on this outcome.
Measure what matters
In order to assess the campaign’s contribution, the brand needs to measure how strongly the audience associates this statement with the brand. The research must begin well before any communication takes place, enabling the brand to identify baseline scores against which it can compare the scores obtained during and after the campaign.
It’s important to note that research should only canvass the relevant audience. Communications should always be tailored to appeal specifically to the people the brand wants to influence, so there’s little point in measuring their impact on other people.
I’ve mentioned before that I’m not a great believer in advertising pre-testing, but I recognise that many people feel more comfortable using it. If you do use it, make sure that it focuses on assessing the campaign’s ability to influence this core attitude.
Once the campaign launches, we need to measure its impact on the relevant attitude score(s).
The first thing we must identify is whether respondents have witnessed any of the relevant communications.
Historically, brands have placed great importance on metrics such as ‘top-of-mind awareness’ and ‘unaided recall’. However, the audience’s ability to remember the campaign doesn’t say much about whether or not they understood anything.
I’d argue that prompted research is much more indicative. For a start, significant differences between projected reach and prompted awareness suggest an area the brand will want to research further. It also provides:
A reliable indication of actual campaign reach;
A ‘control group’ of respondents who claim not to have witnessed any aspect of the campaign, whose scores we can compare to those of people who claim they have seen at least one part of it.
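To illustrate how that ‘control group’ comparison might work in practice, here’s a minimal sketch in Python. It assumes a hypothetical survey export with a flag for claimed exposure and a yes/no coding of agreement with the intended message; the file and field names are purely illustrative.

```python
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical survey export: one row per respondent.
# "saw_campaign"   - claims to have witnessed at least one campaign element (1/0)
# "agrees_message" - agrees with the intended statement, top-two-box coded as 1/0
df = pd.read_csv("survey_wave1.csv")

exposed = df[df["saw_campaign"] == 1]
control = df[df["saw_campaign"] == 0]

counts = [exposed["agrees_message"].sum(), control["agrees_message"].sum()]
nobs = [len(exposed), len(control)]

lift = counts[0] / nobs[0] - counts[1] / nobs[1]
stat, p_value = proportions_ztest(counts, nobs)

print(f"Exposed agreement: {counts[0] / nobs[0]:.1%}")
print(f"Control agreement: {counts[1] / nobs[1]:.1%}")
print(f"Lift: {lift:+.1%} (p = {p_value:.3f})")
```

The gap between the two groups, rather than the exposed score on its own, is what tells you whether the campaign is shifting the attitude you briefed.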
The research continues by showing each respondent a selection of activity from the campaign, and asking them to identify all that they’ve witnessed. This can be as simple as showing them visuals or playing them snippets of audio from different executions, and asking them to confirm whether or not they’ve seen or heard these activities before.
Your research agencies will be best placed to recommend the maximum number of examples you can show, but make sure that every channel in the campaign mix is covered at some point in the research. Don’t forget to include things like events and sponsorship if they’ve been part of the plan.
Examining scores across people who have seen different combinations of channels allows us to determine the cumulative effect of different activities. Furthermore, by comparing these results to the scores of those who have only seen a few activities, we can begin to infer the impact of specific channels and creative executions.
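One simple way to explore this, assuming the same hypothetical survey export now carries a recognition flag per channel, is to group respondents by the combination of channels they recognised and compare average agreement with the intended message:

```python
import pandas as pd

# Hypothetical recognition flags per respondent (1 = recognised that channel's activity).
channel_cols = ["saw_tv", "saw_online", "saw_outdoor"]
df = pd.read_csv("survey_wave1.csv")

# Label each respondent with the set of channels they recognised.
df["channels_seen"] = df[channel_cols].apply(
    lambda row: "+".join(c.replace("saw_", "") for c in channel_cols if row[c] == 1) or "none",
    axis=1,
)

summary = (
    df.groupby("channels_seen")["agrees_message"]
      .agg(respondents="size", agreement="mean")
      .sort_values("agreement", ascending=False)
)
print(summary)
```

Small cells will need to be treated with caution, but the pattern of scores across combinations gives a first read on which channels are adding to understanding and which are merely adding cost.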
The next step is to assess whether the respondent has understood what the brand intended. The best way to measure this is to ask an open-ended question, such as “what did you understand after seeing this advert?”
However, this is often impractical due to respondents’ levels of commitment and involvement, and the need for researchers to record exactly what was said.
As an alternative, you can present respondents with a variety of statements about the brand and the campaign, and ask them to indicate how strongly they agree with each one. It’s best to offer a few variations on what the brand actually intended, along with some other statements that have little to do with the intended message, to allow for more reliable findings.
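For the BrandX example above, such a statement battery might look something like the sketch below; the statements and scale are hypothetical, and your research agency would word and rotate them properly.

```python
# Hypothetical statement battery for the BrandX example.
# One statement is the intended message, one is a close variant, and the rest are decoys.
STATEMENTS = [
    "Using BrandX every day gives my skin a radiant glow",  # intended message
    "BrandX brings dry skin back to life",                  # close variant
    "BrandX is only worth using when my skin feels dry",    # contradicts the message
    "BrandX is a brand my friends talk about",              # unrelated decoy
]

AGREEMENT_SCALE = {
    1: "Strongly disagree",
    2: "Disagree",
    3: "Neither agree nor disagree",
    4: "Agree",
    5: "Strongly agree",
}
```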
Once researchers have gathered all the responses, the final step involves interpreting the results. This begins by comparing pre-campaign scores, ‘control group’ scores, and the scores of people who’ve experienced different aspects of the campaign.
Following these steps with similar research a few months after the campaign has finished allows you to assess whether the campaign has permanently modified perceptions, or simply influenced shorter-term attitudes.
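Lining the research waves up side by side makes this comparison straightforward. Here’s a minimal sketch, again assuming the same hypothetical survey fields and file names: a score that holds up months after the campaign suggests a durable shift in perception, while one that falls back towards the baseline points to a shorter-term effect.

```python
import pandas as pd

# Hypothetical survey exports for three waves of the same questionnaire.
waves = {
    "pre-campaign": "survey_pre.csv",
    "during campaign": "survey_wave1.csv",
    "3 months after": "survey_post.csv",
}

for label, path in waves.items():
    wave = pd.read_csv(path)
    agreement = wave["agrees_message"].mean()
    print(f"{label:>16}: {agreement:.1%} agree with the intended message")
```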
And that’s pretty much it: a simple, practical, but powerful approach to improving advertising measurement.
So what’s holding us back?
Here are the most common objections I encounter when proposing this approach:
“It’s too difficult”
“It would cost too much”
The first objection doesn’t hold up; I’d argue that this process is a lot simpler than the one research companies already use when tracking brand health.
The second point relates to focus: yes, if we ran this process in addition to current measurement, it would add significant cost. However, this approach tells us almost everything we need to know about our advertising, so why would we continue with that other measurement?
A start, not a solution
I don’t pretend that this process is a panacea, and as with all aspects of marketing, you’ll need to adapt it to the specifics of your brand and its context.
However, provided you canvass a sufficient proportion of your audience with focused and relevant questions, this approach should deliver results that are far more informative than most current practices.
What do you think? Which parts need tweaking? What could be added or removed to make it better?
I’d really value your thoughts and comments – please feel free to share them, along with any questions, in the comments section below.