June 18, 2008

Searching Research


S†MoN® says:

Research isn’t science.

Here’s how advertising works: You toil for weeks to come up with a good solution to your client’s problem. Then your campaign is taken to an anonymous building on the outskirts of town and shown to a “focus group” – people who’ve been stopped on the street the previous week, identified as target customers, and paid a small amount of money for their opinion.
After a long day working at their jobs, these tired pedestrians arrive at the research facility and are led into a small room without windows or hope. In this barren, forlorn little box, they are shown your work in its half-formed state while you and your client watch through a two-way mirror.
Here’s the amazing part. These people all turn out to be advertising experts with piercing insights on why every ad shown to them should be force-fed into the nearest shredder fast enough to choke the chopping blades.
Yet who can blame them? They’ve been watching TV since they were kids and have been bored by 100,000 hours of very, very bad commercials. Now it’s payback time, Mr. Madison Avenue Suspenders Man. And because they’re seeing mere storyboards they think, wow, we get to kill the beast and crush its eggs.
Meanwhile, in the room behind the mirror, the client turns to you and says, “Looks like you’re working the weekend, Idea Boy.”

Welcome to advertising.


A committee, it has been said, is a cul-de-sac down which ideas are lured and quietly strangled. The same can be said for the committee’s cousin, the focus group. But this research process, however wildly capricious and unscientific, is here to stay.
Clients are used to testing. They test their products. They test locations for their stores. They test the new flavor, the packaging, and the name on the top. And much of this testing pays off. So don’t think they’re going to spend a couple of million dollars airing a commercial based solely on your sage advice: “Hey, business dudes, I think this spot rocks.”
Used correctly, research is great. What better way is there to get inside the customer’s head? To be what Marshall McLuhan called the “frogmen of the mind” and find out what people like and don’t like, to understand how they live. There is no better way. Most of the good research isn’t done in little buildings outside of town, either, but right downtown in the bars, asking drinkers about their favorite booze; asking shoppers how they choose a product; eavesdropping on real people as they talk about a category or brand.
The point is, the best people in the business use research to generate ideas, not to judge them. They use it at the beginning of the whole advertising process to find out what to say. When it’s used to determine how to say it, great ideas suffer horribly.
Should your work suffer at the hands of a focus group, and it will, there isn’t much you can do except appeal to the better angels of your client’s nature.
What follows are some arguments against the reading of sheep entrails. Or the subjective science of copy testing.


Testing storyboards doesn’t work.
Testing, by its very nature, looks for what is wrong with a commercial, not what is right. Look hard enough for something wrong and you’ll sure enough find it. (I could stare at a picture of Miss November and in a half hour I’d start to notice, is that some broccoli in her teeth? Look, right there between the lateral incisor and the left canine, see?)
Testing assumes people react intellectually to commercials, that people watching TV in their living rooms dissect and analyze these interruptions to their sitcoms. (“Honey, come in here. I think these TV people are forwarding an argument that doesn’t track logically. Bring a pen and paper.”) In reality, you and I both know their reactions are visceral and instantaneous.
Testing is inaccurate because storyboards don’t have the magic of finished commercials. Would a focus group approve this copy had it just been read to them? “Chestnuts roasting on an open fire. Jack Frost nipping at your nose.” Probably not. I can just see a focus group member putting down his doughnut to protest: “I hate those chestnut things. And also, who wants to sing about wind chill? Can’t the song be about something happy?” But the fact is, millions of people have been charmed into buying this simple holiday song after hearing the recorded version.
Testing rewards commercials that are derivative because commercials that have a familiar feel score better than commercials that are unique, strange, odd, or new – the very qualities that can lift a finished commercial above the television’s clutter.
If tone is important to a client, testing is inaccurate because 12 colored pictures pasted to a board will never communicate tone like actual film footage, voice-over, and music.
Testing, no matter how well disguised, asks consumers to be advertising experts. And invariably they feel obligated to prove it.
Finally, testing assumes we really know what makes a commercial work and that it can be quantifiably analyzed. It can’t. Not in my opinion. It’s impossible to measure a live snake.
Bill Bernbach said, “We are so busy measuring public opinion, we forget we could mold it. We are so busy listening to statistics that we forget we could create them.” This simple truth about advertising is lost the minute a focus group sits down to do its business. In those small rooms, the power of advertising to affect behavior is not only subverted, it’s reversed. The dynamic of a commercial coming out of the television to consumers is replaced with consumers telling the commercial what to say.
I say, big deal if a group says your storyboard doesn’t reflect their opinions. With a good director and a couple of airings on the right programs, their opinions may reflect your commercials.
These arguments, for what they are worth, might come in handy someday, especially if you have a client who likes the commercial you propose, but has to defend poor test scores to a management committee.


Extensive research has proven that extensive research is often wrong.
From a book called Radio Advertising by Bob Schulberg, I bring this research study to your attention:
J. Walter Thompson did recall studies on commercials that ran during a heavily viewed mini-series, “The Winds of War.” The survey showed that 19 percent of the respondents recalled Volkswagen commercials; 32 percent, Kodak; 32 percent, Prudential; 28 percent, American Express; and 16 percent, Mobil Oil. The catch is that none of these companies advertised on “The Winds of War.”
In the mid-‘80s, research told management of the Coca-Cola Company that younger people preferred a sweeter, more Pepsi-like taste. Overlooking fierce customer loyalty to this century-old battleship of a brand, they reformulated Coca-Cola into New Coke, and in the process poured about $1 billion down a rat hole.
“We forgot we could mold it.”
Research people told writer Hal Riney that entering the wine cooler category was a big mistake. Seagram’s and California Cooler had it locked up. Then Riney began running his Bartles & Jaymes commercials and a year later his client had the Number 1-selling wine cooler in America.
“We forgot we could mold it.”
Research people told writer Cliff Freeman when he was working on Wendy’s hamburgers, “Under absolutely no circumstances run ‘Where’s the Beef?’” After it ran, sales shot up 25 percent for the year and Wendy’s moved from fifth to third place in fast-food sales. The 20,000 newspaper articles lauding the commercial didn’t hurt either.
“We forgot we could mold it.”
And what some call the greatest campaign of the twentieth century, Volkswagen – none of it was subjected to pre-testing. The man who helped produce that Volkswagen campaign had a saying: “We are so busy measuring public opinion, we forget we could mold it.”


Because focus groups can prove anything, do they prove anything?
British ad star Tim Delaney, in a famous article on the value of intuition, wrote:
Have you noticed what happens when five agencies are competing for an account? They all come up with completely different strategies and ideas – and yet, miraculously, each of them is able to prove, through objective research, that their solution is the right one. If nothing else does, this alone should devalue the currency of focus groups. Researchers think that if you spend a lot of time analyzing a problem beforehand, it will bring you closer to the advertising solution. But the truth is, you only really begin to crack advertising problems as you get deeper and deeper into the writing. You just have to sit down and start writing on some kind of pretext – and that initiates the flow of ideas that eventually brings a solution. In my agency, we start writing as soon as possible, before the researchers have done their analysis. And we usually find the researchers trailing behind us, telling us things we already thought of.
The writing itself is the solution to the problem. It’s in the writing itself that the answers appear, when you’re in there getting your hands dirty mucking about in the mud of the client’s marketing reality. The answers are right there in that place where there’s direct contact between the patient and the doctor. And if the doctor has a question about how to proceed, who would you want him to ask for advice? A focus group of grocers, lawyers, and cab drivers? Personally, I’d want it to be another doctor.
I have a friend who walks around the agency trying to find out if a concept he’s done is any good. He keeps going around until the “it’s cool” votes outnumber the “it sucks” votes. Sometimes he doesn’t get the answer he wants and keeps working.
You know what? In my opinion, it’s the only pre-testing that works. The agency hallway.


Science cannot breathe life into something. Dr. Frankenstein tried this already.
David Ogilvy once said that research is often used the way a drunk uses a lamppost: for support rather than illumination. It’s research used to protect preconceived ideas, not to explore new ones.
Another way that research can be used poorly is what I call “Permission Research.” Permission Research happens when agencies show advertising concepts to customers and ask if they like them or not. (“Can we air this? Oh, please?”)
What’s unfortunate about Permission Research is that clients and agencies often use it to validate terrible advertising. Yes, it all looks and sounds like science, but as prudent as such market inquiries appear on the surface, the argument is specious. Because the very process of Permission Research, with all its attendant consensus and compromise, will grind the work into vanilla.
As an example of what the process of Permission Research can do, I cite an interesting and very funny study done in 1997 by a pair of Russian cultural anthropologists – Komar and Melamid. With tongue firmly planted in cheek, these two researchers set out to ask the public, “What makes for a perfect painting? What does a painting need to have in order for you to want to hang it in your home?”
They did massive amounts of research, hosting hundreds of focus groups all over the world. Their findings, meticulously prepared and double-checked with customers, were as follows: 88 percent of customers told them, “We like paintings that feature outdoor scenes.” The color blue was preferred by 44 percent of respondents. “Having a famous person” in the painting got the thumbs up from a full 50 percent. Fall was the preferred season. And animals! You gotta have some animals.
All this research was compiled and a painting was commissioned. The final “art” that came out of the lab (to nobody’s surprise) was very bad.
The point? Research is best used to help craft a strategy, not an execution. As journalist William F. Buckley once observed, “You cannot paint the Mona Lisa by assigning one dab each to a thousand painters.”


Market research is like the weather forecast: You’ve got to take it into account, but you can’t trust it.
Market research is one obvious way in which we try to get a grip on the shape of the market, just as the weather forecast is one way we try to get a grip on the likely ‘geography’ of a yacht race. The weather forecast is useful, but it won’t tell you how to win a race. It can occasionally be very inaccurate and is only ever a guide to likely conditions. Your competitors probably have the same information.
Exactly the same limitations apply to market research as to weather forecasts. We should treat all qualitative and quantitative information in the same way in terms of its predictive ability. The lesson is that, although we should have a point of view, we need to be prepared to deal with times when things turn out not to be as we thought they would be.

"We stand in our shadows and wonder why it is dark".

End
