Posts Tagged 'propagandata'

Data Don’t Tell Stories

Data may not lie, but people’s selective interpretation of data can significantly change the stories they tell with data’s support.

Take this piece of analysis from the IDC, “the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications and consumer technology markets”:

Despite beating Wall Street expectations in terms of shipment volumes, Apple’s share in the worldwide smartphone operating system market posted a year-over-year decline during the second quarter of 2013 (2Q13). Meanwhile, Android and Windows Phone both managed slight increases during the same period. “The iOS decline in the second quarter aligns with the cyclicality of iPhone,” says Ramon Llamas, Research Manager with IDC’s Mobile Phone team. [from here]

Now, let’s look at the actual data:

[chart: Smartphone Operating Systems, 2Q13 – data source]

Of course, what the IDC notes is true – Apple’s share has declined by 340bp over the past 12 months – but the important part of the story they’ve chosen not to highlight is that Apple’s shipments still increased by 20% during the same period.
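To see how both of those statements can be true at once, here's a quick back-of-the-envelope check in Python. The market totals are figures I've invented purely to be consistent with the 340bp and 20% numbers above, not a restatement of IDC's dataset:

```python
# Illustrative figures only: a vendor's share falls while its shipments
# grow whenever the overall market grows faster than the vendor does.

apple_2q12, apple_2q13 = 26.0, 31.2        # shipments, millions (assumed)
market_2q12, market_2q13 = 156.6, 236.4    # total market, millions (assumed)

growth = apple_2q13 / apple_2q12 - 1                 # +20%
share_2q12 = apple_2q12 / market_2q12                # ~16.6%
share_2q13 = apple_2q13 / market_2q13                # ~13.2%
decline_bp = (share_2q12 - share_2q13) * 10_000      # ~340bp

print(f"Shipments: {growth:+.0%}; share: {share_2q12:.1%} -> {share_2q13:.1%} "
      f"({decline_bp:.0f}bp decline)")
```

Both headlines come from the same numbers; the story depends entirely on which comparison the presenter chooses to foreground.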

My interest here is not whether iOS is going to beat Android, though; I’m merely concerned with this skewed representation of an important story.

As with all propagandata, it’s another case of “torture numbers and they’ll tell you anything.”

Always question what the numbers really say, not what the presenter chooses to highlight.


measures of success


John posted a great comment in response to a recent post:

“Measurement of [TV’s] effectiveness is based on samples, not actual viewers, and often the best data you get about the audience is generalized demographic / psycho-graphic information.”

It’s no secret that I have strong opinions when it comes to measurement and research.

My concern is simple: I don’t believe we’re measuring the right things.

It’s time we changed that.

This post outlines an alternative approach to ad measurement, but it still needs some tweaking, so I’d really appreciate your suggestions on how we might improve it.

Let’s begin with some context…

The role of measurement

Over a century ago, John Wanamaker famously remarked:

“Half the money I spend on advertising is wasted; the trouble is, I don’t know which half.”

His fears have been echoed by marketers ever since, and we continue to invest huge sums trying to identify which of our dollars are wasted.

However, this focus on wastage means we’ve been missing the forest for the trees: before we can understand how hard our investments are working, we first need to understand whether campaigns are delivering on our objectives.

Advertising success is not just about efficiency; we also need to measure its effectiveness.

To examine these factors in context, we first need to understand the objectives we hope to address with advertising – in other words, what do we want our brand communications to achieve?

Why do we communicate?

At a fundamental level, communication serves a very simple purpose:

To create a shared understanding between two or more people.

It follows that the purpose of brand communications is:

To create a shared understanding between a brand and the people it wishes to influence.

So, in order to measure advertising’s effectiveness, we simply need to determine whether the audience has understood what the brand intended.

To measure the campaign’s efficiency, we need to compare the proportion of the audience that correctly understood the message with the different campaign elements they’ve experienced, and the cost of those different elements.
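To make that distinction concrete, here’s a rough sketch; every number and element name in it is hypothetical:

```python
# Effectiveness: what proportion of the target audience took out the
# intended meaning. Efficiency: what each campaign element cost per
# person who understood. All figures are invented for illustration.

audience_surveyed = 1_000
understood_message = 320

effectiveness = understood_message / audience_surveyed   # 32%

element_costs = {"tv": 400_000, "online_video": 120_000}   # assumed spend
understood_by_element = {"tv": 210, "online_video": 150}   # assumed counts

cost_per_understanding = {
    element: element_costs[element] / understood_by_element[element]
    for element in element_costs
}

print(f"Effectiveness: {effectiveness:.0%} understood the intended message")
print(cost_per_understanding)   # {'tv': ~1905, 'online_video': 800.0}
```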

In light of the above definitions, it seems logical that both measures centre on the audience’s level of understanding.

So why do we consistently resort to metrics that have so little to do with what really matters?

Flawed measures

Reach (as media agencies currently use the term) is simply a projection of potential audience size. This is the concern that John highlighted in his comment above: reach doesn’t tell you whether anyone actually witnessed your communications, and it gives no indication of whether those who did witness them understood anything.

Meanwhile, frequency is equally limited in its value, informing us of little more than the number of opportunities each individual had to witness the campaign (but again, not telling us if they actually did see things that many times). I’ve talked about frequency’s limitations before, so I’ll avoid going into any more detail here.

The problem with these metrics is that they equate volume with success. However, the more you shout at people, the more they’ll try to ignore you.


Even ‘brand health’ metrics are compromised when it comes to determining advertising’s impact, because they tend to look at a brand’s performance in aggregate, rather than just the performance of its communications. This macro view means that we cannot determine the campaign’s influence ceteris paribus (all else held equal), and consequently we cannot ascribe changes in brand health scores solely to advertising.

So what can we do to improve advertising measurement?

Start out right

A simple improvement would be to focus on objectives.

It’s vital that brands and their agency partners develop campaigns around what the brand wants its audiences to understand – although it’s a widely misused term, I’ll refer to this as ‘the message’.

Only once we’ve agreed this message can we be sure that we’re developing the most efficient and effective communications.

An example might help to put this in context.

A popular brand of face cream wants to grow revenues amongst existing users in Thailand. Traditional ‘brand health’ research shows that the brand is well liked and respected, but that its use is sporadic, with medium and light users applying the product just a few times each month.

Face-to-face conversations with these consumers reveal that they think it does a great job of rehydrating their skin, but the humid climate in Thailand means that this benefit is only relevant on a few days each month.

The brand recognises that it needs to modify people’s perceptions to help them see the brand as an everyday cosmetic, rather than simply as a functional moisturiser. In other words, it needs to tell them that it offers a bonus, and not just a remedy:

[image: benefit baseline]

Further research reveals that this audience uses moisturisers to “bring their skin back to life.” When their skin is dry, they think that it looks “flat” and “tired”, but when it’s properly hydrated, they believe it exudes a “radiant glow.” A closer look at the research indicates that this radiance is not only a critical category driver, but that the brand already scores highly on this attribute.

Brand management believes that focusing the audience’s attention on this motivating benefit will encourage them to use the brand more often. They brief a campaign that drives the perception that “using BrandX every day gives my skin a radiant glow.”

This clear statement of what the brand wants people to understand ensures that everybody works towards the same objective. Rather than simply trying to ‘raise awareness’, each agency knows what it must communicate, and can therefore optimise its approach to focus on this outcome.

Measure what matters

In order to assess the campaign’s contribution, the brand needs to measure how strongly the audience associates this statement with the brand. The research must begin well before any communication takes place, enabling the brand to identify baseline scores against which it can compare the scores obtained during and after the campaign.

It’s important to note that research should only canvass the relevant audience. Communications should always be tailored to appeal specifically to the people the brand wants to influence, so there’s little point in measuring their impact on other people.

I’ve mentioned before that I’m not a great believer in advertising pre-testing, but I recognise that many people feel more comfortable using it. If you do use it, make sure that it focuses on assessing the campaign’s ability to influence this core attitude.

Once the campaign launches, we need to measure its impact on the relevant attitude score(s).

Measuring up

The first thing we must identify is whether respondents have witnessed any of the relevant communications.

Historically, brands have placed great importance on metrics such as ‘top-of-mind awareness’ and ‘unaided recall’. However, the audience’s ability to remember the campaign doesn’t say much about whether or not they understood anything.

I’d argue that prompted research is much more indicative. For a start, significant differences between projected reach and prompted awareness suggest an area the brand will want to research further. It also provides:

A reliable indication of actual campaign reach;
A ‘control group’ of respondents who claim not to have witnessed any aspect of the campaign, whose scores we can compare to those of people who claim they have seen at least one part of it (a comparison sketched below).
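Here’s a minimal sketch of that split, assuming a hypothetical set of survey records (the field names are mine, not any real research platform’s schema):

```python
# Each record: which campaign elements the respondent recognised when
# prompted, plus their agreement with the intended message (1-5 scale).
respondents = [
    {"recognised": {"tv"}, "score": 4},
    {"recognised": set(), "score": 2},
    {"recognised": {"tv", "ooh"}, "score": 5},
    {"recognised": {"online_video"}, "score": 4},
    {"recognised": set(), "score": 3},
]

exposed = [r for r in respondents if r["recognised"]]
control = [r for r in respondents if not r["recognised"]]

prompted_reach = len(exposed) / len(respondents)   # compare with projected reach

def mean_score(group):
    return sum(r["score"] for r in group) / len(group)

print(f"Prompted reach: {prompted_reach:.0%}")
print(f"Exposed mean: {mean_score(exposed):.1f} vs control: {mean_score(control):.1f}")
```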

The research continues by showing each respondent a selection of activity from the campaign, and asking them to identify all that they’ve witnessed. This can be as simple as showing them visuals or playing them snippets of audio from different executions, and asking them to confirm whether or not they’ve seen or heard these activities before.

Your research agencies will be best placed to recommend the maximum number of examples you can show, but make sure that every channel in the campaign mix features in the research at some point. Don’t forget to include things like events and sponsorship if they’ve been part of the plan.

Examining scores across people who have seen different combinations of channels allows us to determine the cumulative effect of different activities. Furthermore, by comparing these results to the scores of those who have seen only a few activities, we can begin to infer the impact of specific channels and creative executions.
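One simple way to do that grouping, again using invented records of the same shape as the sketch above:

```python
from collections import defaultdict

# Hypothetical records: channels recognised plus message score (1-5 scale).
respondents = [
    {"recognised": {"tv"}, "score": 4},
    {"recognised": {"tv", "ooh"}, "score": 5},
    {"recognised": {"ooh"}, "score": 3},
    {"recognised": set(), "score": 2},
]

# Group scores by the exact combination of channels each respondent saw.
by_combination = defaultdict(list)
for r in respondents:
    by_combination[frozenset(r["recognised"])].append(r["score"])

# Comparing single-channel groups with multi-channel groups hints at the
# cumulative effect; treat small cells with caution.
for channels, scores in sorted(by_combination.items(), key=lambda kv: len(kv[0])):
    label = " + ".join(sorted(channels)) or "none (control)"
    print(f"{label}: mean {sum(scores) / len(scores):.1f} (n={len(scores)})")
```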

Comprehension

The next step is to assess whether the respondent has understood what the brand intended. The best way to measure this is to ask it as an open-ended question, such as “what did you understand after seeing this advert?”


However, this is often impractical due to respondents’ levels of commitment and involvement, and the need for researchers to record exactly what was said.

As an alternative, you can present respondents with a variety of statements about the brand and the campaign, and ask them to indicate how strongly they agree with each one. For more reliable findings, it’s best to offer a few variations on what the brand actually intended, along with some decoy statements that have little to do with the intended message.
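The battery might look something like this; the wording is entirely hypothetical, and the point is simply the mix of intended message, near variations, and decoys:

```python
# Respondents rate each statement on a 1-5 agreement scale. Only the
# researcher knows which statements carry the intended message.
statements = [
    ("intended",  "Using BrandX every day gives my skin a radiant glow"),
    ("variation", "BrandX keeps my skin glowing, day after day"),
    ("decoy",     "BrandX is the best-value moisturiser available"),
    ("decoy",     "BrandX is recommended by dermatologists"),
]

# A respondent who scores the intended statement and its variations well
# above the decoys has probably taken out the message, rather than simply
# agreeing with everything put in front of them.
for role, wording in statements:
    print(f"[{role:>9}] {wording}")
```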

Once researchers have gathered all the responses, the final step involves interpreting the results. This begins by comparing pre-campaign scores, ‘control group’ scores, and the scores of people who’ve experienced different aspects of the campaign.
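At its simplest, that interpretation boils down to two subtractions; the scores below are invented purely to illustrate the logic:

```python
# Mean agreement with the intended statement (1-5 scale); hypothetical.
pre_campaign = 2.4   # baseline, measured before launch
control = 2.5        # in-flight, respondents who recognised no element
exposed = 3.3        # in-flight, respondents who recognised at least one

# Control vs baseline: background drift (seasonality, PR, competitor noise).
background_drift = control - pre_campaign    # +0.1

# Exposed vs control: the shift we can more plausibly credit to the campaign.
campaign_lift = exposed - control            # +0.8

print(f"Background drift: {background_drift:+.1f}; campaign lift: {campaign_lift:+.1f}")
```

Separating background drift from campaign lift is exactly what aggregate ‘brand health’ tracking can’t do, which is why the control group matters.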

Following these steps with similar research a few months after the campaign has finished allows you to assess whether the campaign has permanently modified perceptions, or simply influenced shorter-term attitudes.

And that’s pretty much it: a simple, practical, but powerful approach to improving advertising measurement.

The barriers

So what’s holding us back?

Here are the most common objections I encounter when proposing this approach:

“It’s too difficult”
“It would cost too much”

The first point doesn’t hold up; I’d argue that this process is a lot simpler than the one research companies already use when tracking brand health.

The second point relates to focus: yes, if we ran this process in addition to current measurement, it would add significant cost. However, this approach tells us almost everything we need to know about our advertising, so why would we continue with that other measurement?

A start, not a solution

I don’t pretend that this process is a panacea, and as with all aspects of marketing, you’ll need to adapt it to the specifics of your brand and its context.

However, provided you canvass a sufficient proportion of your audience with focused and relevant questions, this approach should deliver results that are far more informative than most current practices.

What do you think? Which parts need tweaking? What could be added or removed to make it better?

I’d really value your thoughts and comments – please feel free to share them, along with any questions, in the comments section below.

You may also find the following posts useful:

8 steps to better communications
Propagandata
Anjali shared some great thoughts on a similar subject in this recent post

influencing influence

eskimon's paid opinions

Paid opinions are a hot topic for discussion at the moment.

In the past 24 hours, PSFK, Marketing Pilgrim, and 1000Heads have all shared some great thoughts on the subject.

While reading their posts, it occurred to me that people view this issue quite differently, depending on the context.

That’s not surprising – context is always critical – but which specific elements influence our perspective?

In the ‘offline’ world, we seem to have little issue with paid endorsement.

Sports players invariably endorse the brands they use, and most of us seem comfortable with that.

The thinking seems to be,

“If Tiger’s success depends so heavily on the clubs he uses, surely he wouldn’t compromise his success to endorse a brand he doesn’t trust?”

Similarly, come Oscars time, gossip columns lead with stories on which designer was ‘chosen’ by each celebrity.

“If Angelina’s success depends so heavily on looking great at all times, surely she wouldn’t compromise her look by wearing anything less than the best label?”

Such sponsorship seems acceptable to most people.

But when it comes to sponsored editorial and opinion – especially online – people adopt a very different standpoint.

“If a blogger is being paid to review a brand, their review will inevitably be biased”

Why this change of perspective?

Blogging success is (usually) determined by readership, and that readership depends on the respect and trust of the blog’s followers.

So why would any sensible blogger compromise their success for any brand that pays them?

It seems ironic that, when it comes to sponsorship, we place less faith in the actions of the people whose opinions we normally trust than we do in those of celebrities and sportspeople.

What do you think?

I’d love to hear your thoughts in the comments section below.

propagandata


I’m a great believer in the value of research, but I’m dismayed by the frequency with which findings are distorted in order to endorse or support a particular agenda.

As I’ve noted before,

“Torture numbers and they’ll tell you anything.*”

So it was with interest that I read this headline in MediaWeek:

“Survey: Consumers Don’t Hate Ads”

After reading the article, I dug a little deeper into the source material – the recently published “Nielsen Global Online Consumer Survey: Trust, Value and Engagement in Advertising.”

It’s full of great data, and I’d been looking forward to this latest iteration of the bi-annual survey.

However, two areas of this year’s report disturbed me.

The first is the conclusion that inspired the MediaWeek headline:

“Consumer perceptions on the value of advertising are generally positive.”

Let’s look at the data that ‘support’ that conclusion:

[image from Nielsen’s report]

You’ll notice that these statements are framed as ‘facts’.

But when the report draws its conclusions on these findings, it states:

“We asked if advertising…

  • increases value for consumers (through competition);
  • promotes consumer choice (helping consumers exercise their right to choose);
  • powers economic growth (by helping companies succeed);
  • creates jobs (through economic growth and as an industry in itself);
  • is the lifeblood of media (funding a diverse, pluralistic media landscape);
  • funds sports and culture (through sponsorship);
  • helps make a difference (through public service advertisements);
  • often gets my attention and is entertaining.”

These ‘questions’ are quite different to the statements in the chart above.

So, do the data really show that “consumer perceptions on the value of advertising are generally positive”?

I’m not convinced.

My second issue relates to a regular concern:


[image from Nielsen’s report]

You probably know what’s coming…

“Peer recommendation is the most trusted [advertising] channel, trusted “completely” or “somewhat” by 9 out of 10 respondents worldwide.”

I’ve talked about this before.

‘Peer Recommendation’ / ‘WOM’ / ‘Consumer Opinions Posted Online’ / ‘Editorial Content’ are not advertising channels.

Rather, they are all consequences of other marketing activities.

People trust them precisely because they’re not advertising.

In their true form, they’re unbiased, and that’s what makes them persuasive and trustworthy.

Sure, brands have tried to hijack them and use them as channels, but that invariably generates mistrust rather than trust, as evidenced here.

I don’t dispute the value of word of mouth, but we need to accept that it’s not advertising; brands cannot ‘buy’ these ‘channels’ any more than they can ‘buy’ sales.

Having raised these two concerns, though, I still encourage you to download a copy of Nielsen’s report and study the numbers for yourself.

So long as you approach them with an open mind and an unbiased agenda, you’ll find them highly informative and very useful.

[As a side note, perhaps we should see the report’s conclusions in the context of this post]

*Thanks again to Kelvin for this wonderful quote.




