Trust and Safety Aren’t Industry Problems—They’re Client Problems

Trust and safety are not research industry responsibilities—they are client responsibilities, because trust is ultimately a personal, narrative-dependent judgment.

James Snyder, in the September 1, 2025 issue of Quirk’s Marketing Research Review, writes a fine yet obvious piece called “Building resilient research marketplaces through trust and safety.” He makes the case that digital methodologies, having evolved and scaled, bring risk from “online survey fraud, compromising data quality.”

His premise is that bot activity, survey manipulation, and the like “have made trust and safety no longer optional but essential.”

My question is simple: when have trust and safety been optional?

There’s no question that safety and trust are in the crosshairs these days; Snyder himself points out, “fraud and data integrity failures are systemic. They exploit gaps in communication, process design and incentives.” Spot-checking work doesn’t cut it, giving us a false sense of security. He concludes we have “to build systems and cultures that assume fraud is inevitable and that are designed to catch it before it [fraud] causes harm.” He proposes to do this by moving from being reactive to proactive.

Sounds simple enough. Snyder gives a variety of examples on how to do this, including investing in tooling and data architecture “that allow for more granular insight.” His specific example is a “centralized identity graph for real-time traffic validation.”

Pick a Word, Any Word

Years ago I wrote a blog post: “Pick a word, any word.” Google is kind enough, after all these years, to put me in the first three results when you google those words – and doesn’t even stretch itself to add its AI component to the search. In that post, I wrote about a CEO who wanted to change his company’s direction and “broaden our risk appetite.” When I heard that, I was reminded of a chart I had seen many years before. The chart had three columns of words, numbered “0” through “9,” and the person looking at it could pick three random numbers, combine the words, and, well, sound really, really smart.

So say “8-3-7” and combine the words from the columns; in this case, the words are “Compatible reciprocal projection.” Wow. Or how about “Functional third-generation concept” – a “4-8-5.” Sounds equally intense. Kinda like “real-time traffic validation” or “centralized identity graph.” I’m just sayin’. Check out my blog for yourself.
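The chart’s mechanics are simple enough to sketch in a few lines of Python. The word lists below are a reconstruction chosen to reproduce the post’s two examples (they match the classic “buzz phrase” chart commonly attributed to Philip Broughton), not necessarily the exact chart I saw:

```python
import random

# Three columns of buzzwords, indexed 0-9. Reconstruction of the classic
# buzz-phrase chart; picked so the post's two examples come out right.
COL1 = ["integrated", "total", "systematized", "parallel", "functional",
        "responsive", "optional", "synchronized", "compatible", "balanced"]
COL2 = ["management", "organizational", "monitored", "reciprocal", "digital",
        "logistical", "transitional", "incremental", "third-generation", "policy"]
COL3 = ["options", "flexibility", "capability", "mobility", "programming",
        "concept", "time-phase", "projection", "hardware", "contingency"]

def buzz_phrase(a: int, b: int, c: int) -> str:
    """Combine one word from each column into an impressive-sounding phrase."""
    return f"{COL1[a]} {COL2[b]} {COL3[c]}".capitalize()

print(buzz_phrase(8, 3, 7))  # Compatible reciprocal projection
print(buzz_phrase(4, 8, 5))  # Functional third-generation concept

# Or roll three random digits and sound really, really smart:
print(buzz_phrase(*(random.randrange(10) for _ in range(3))))
```

Any three digits produce a phrase that sounds authoritative and means nothing – which is rather the point.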

The point is that Snyder’s conclusion – “to be effective, data needs to be clean, connected and centralized” – isn’t new, and that to get there you have to avoid “siloing of trust and safety work.” So when he writes, “Quality should be part of the way sales talks about value,” it’s one of those obvious “duh” moments: what else should you be talking about except value and quality?

Intangible words like “trust” and “safety” are the ones that cause all the problems. Defining them, and agreeing on what those definitions mean, is the crux of every research project.

Snyder’s article is about trust and safety, and the dirty secret is that you either trust or you don’t. You are safe, or you’re not. And since we’re dealing with words – words written or spoken by someone – it always comes down to WHO is the narrator of those words, that data?

The Importance of Narrators

There are only two types: first person, and third person. I pointed this out in my recent post Oh What a Tangled Web: Lies, Narrators, and the Fragile Nature of Belief. I noted among other things, “The problem with life is that we are all first-person narrators in this story – our story. Once you lie, you lose your reliability and really can never get it back.”

That’s what trust is all about, isn’t it? Believing what you are TOLD by others, believing the data – data produced by a first-person narrator.

You either believe, or you don’t. You are, or are not. There is no middle, grey-area ground for trust or safety.

So the problem is this: can you ever trust 100% or be 100% safe?

Obviously not. But then, you can’t trust 80% or be safe 80% — or can you?

Maybe the real question is what disrupts trust and safety in data? We assume going in that the results will be true and that our conclusions will be “safe.” But assuming is that old story, isn’t it – the ass-out-of-u-and-me.

Besides, mistakes happen. We’re human. Yet in narration, are the mistakes really mistakes, or are they bona fide attempts to misinform (i.e., the fraud Snyder discusses)?[1] Snyder aptly points out, “Cleaning bad data often means rerunning surveys, reanalyzing results, and reassessing vendor relationships.” Yet that’s the nature of change, isn’t it?

Change

Heraclitus said we can’t step into the same river twice. A wise man. And the picture you get from a dataset is just that: a picture – one foot in one river at one point in time. You either believe it, or you don’t. You can keep putting your foot into the river over and over, but it will never be the same.

Snyder’s example for all this is Coca-Cola’s failed 1985 launch of New Coke. That surprised me, because that research failure was pre-AI. Was the data flawed? I doubt it. Was the data flawed for the recent rebranding of Cracker Barrel? Actually, no. In both cases, the audience sampled was flawed, or the methodology was. Or something else was (i.e., fraud).

Data are data. A “1” is a “1” and a “0” is a “0”. Where the data come from is really the key to good or bad data, to fraud or an honest mistake. The “where” is always the root cause.

When Snyder says, “People are at the heart of successful trust and safety initiatives,” he states the obvious, but it’s a misdirection: it’s not the research community but the clients who assume the awesome responsibility of accepting or rejecting perceptions. “In a world where trust is constantly under attack, human judgment is still one of our most powerful defenses,” he tells us – as if human judgment were infallible. But he’s right: our judgment is our only defense against misperception, regardless of the source.

His essay ends in a kumbaya moment calling for collaboration throughout the industry. But fighting fraud isn’t the industry’s responsibility: it belongs to the clients who hire the industry. Questions of who you believe, what you believe, and why you believe what is put in front of you have everything to do with the receivers – not the senders.

Trust and safety are not industry responsibilities at all—they belong to the client. And far from absolving researchers of the need to validate their data, putting the “last call” on clients actually demands more validation, not less. When the client carries the ultimate responsibility for believing—or rejecting—what research tells them, it forces the client to interrogate the data, question its origins, challenge its assumptions, and require transparency from the people who produce it.

Trust is not something the industry hands over wrapped in jargon or secured by new tools; trust is something the client must actively construct. Researchers can support that process, but they cannot substitute for it. In the end, belief is always a judgment made by the receiver, not the sender.

In advertising, it’s said that perception is reality. In research, data shapes perceptions. The job of researchers is to find out what the data shows and tell their clients. Whether or not the data is real has to be judged by the receiver – the client. Those are the judgments we all make whenever we perceive anything.

Whether or not we believe what research finds and tells us, well, that’s ultimately up to each one of us. It always was like that, and it will always be like that. Let me hear from you!

_______________________________________________

[1] In Mistakes, I asked: what do you do when you make a mistake? Do you fess up and admit it as fast as you can? Run away? Mistakes are a chance to show your best, because they are a problem to be solved. It doesn’t matter if you made the mistake or someone else did; it’s a problem. But if you made it, how you act after you discover it tells a lot. What Snyder is talking about, though, isn’t mistakes: it’s cheating. And it’s the person being cheated who has the responsibility to detect the cheating.

For more insights follow interlinejim@twitter
