Marketing is all about generating ideas and picking the best ones to execute. Really good marketing is about testing to make sure our ideas (and execution) are as good as we think they are. But in my 210 dog-years in this crazy game, I’ve seen more potentially good ideas killed – without so much as a thought, let alone a test – by three little words that kill all debate: “We tried that.”
“We tried that” murders ideas earlier and more thoroughly than any other objection. It appeals to the scientist in all of us and invokes direct, observed experience to conclude that a given strategy, tactic or technique won’t work.
It still amazes me how these words invariably go unchallenged. As if the idea in question has been subjected to a thorough, scientific trial and has been proven to fail. And that the data is there for all to see should they bother to read their email attachments.
Nothing could be further from the truth. “We tried that” is, in fact, an almost meaningless statement disguised as unassailable fact. To question it is to declare oneself the idiot condemned to repeat history.
But if instead of simply accepting the “Been there/done that/flopped” death sentence, we actually paused to question it, a lot of really good ideas would live to see the light of day (as would lots and lots of, let’s be honest, lame ones).
Why? Because real-world marketing isn’t a controlled lab experiment executed in clean rooms by white-coated, be-goggled automatons – it’s a messy, multivariable, context-dependent crap shoot in the noisy, boozy casino of modern, multichannel life.
Putting aside the frighteningly rare cases of real A/B and multivariate testing, most of what marketers ‘learn’ amounts to a proto-hunch based on that one time some guy who used to work here reportedly stuck his finger in the air and announced which way the wind appeared to be blowing.
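For contrast, here’s a minimal sketch of what even the simplest “real” test involves – a standard two-proportion z-test comparing response rates between two mailing variants. All numbers and names here are hypothetical, purely for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical campaign: variant B "lifted" response from 5.0% to 6.25%
z, p_value = two_proportion_z(120, 2400, 150, 2400)
```

Note what falls out of these made-up numbers: a 25% relative lift across 2,400 mailings per cell still lands above the conventional 0.05 significance bar. If even a deliberate, controlled comparison can be inconclusive, a single remembered flop certainly is.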
Any time we try anything, we’re really trying many things.
When we run a PPC ad, we’re testing the copy, search term, offer, URL, search engine, budget, bidding dynamics, region, time of day and competitive frame.
When we send a simple piece of direct mail, we’re testing a unique combination of list, medium, offer, copy, design, format, day-of-the-week, season, region and phase of the moon.
When it flops, we have the right to conclude that this specific piece of direct mail didn’t work on this occasion. We emphatically do not have the right to conclude, “Direct mail doesn’t work. We tried that.”
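One way to see the size of the problem: even with coarse, invented levels for the variables listed above, a single mailing pins down exactly one cell of an enormous design space. (The factor names and level counts below are illustrative assumptions, not data.)

```python
import math

# Hypothetical levels for each variable a single mailing silently fixes
factors = {
    "list": 2, "medium": 2, "offer": 3, "copy": 2, "design": 2,
    "format": 2, "day_of_week": 7, "season": 4, "region": 5,
}

# Total distinct combinations; one flopped mailing sampled just one of them
combinations = math.prod(factors.values())
print(combinations)
```

Thousands of untested combinations remain, which is exactly why “this specific piece didn’t work on this occasion” is the strongest conclusion the flop supports.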
And yet that’s exactly what happens nearly every day in every marketing department – whether out loud or internalized. We enshrine false conclusions that close down our options. We kill viable ideas before even considering them. And that’s a way higher price than we should be willing to pay.
It’s probably a primitive survival thing. Our brains are wired to draw generalized conclusions from limited evidence. It’s why we run from a brown bear even though our cave-mate was mauled by a grizzly (the wisdom of which is solidly reinforced when the guy who stays put and says, “Dude: wrong species” gets his face remodeled).
Try this: next time someone says, “We tried that,” pause the conversation for a minute and ask a few questions. Find out what was actually tried and how valid the conclusions are. You won’t need to be Hercule Poirot (or Marie Curie or any other French-speaking evidence-based celebrity) to discover that the experience referred to has almost zero validity in the context of the new discussion.
But we rarely do question the finality of “We tried that.” Everyone nods and the next idea is dragged onto the table for its own ritual slaughter. (Geez, do I sound bitter here?)
Bottom line: bad marketing science is dangerous. And we should demand that it at least get itself a half-decent disguise – with data and control groups and stuff – before we accept it as real science. “We tried that” just doesn’t cut it.