
Rachel Botsman Says Trust Enables Everything. Attackers Agree. 

April 18, 2026

By Jane Frankland

I've been following Rachel Botsman's work for years.

Partly because she is one of the world's most original thinkers on trust — a subject that sits at the heart of everything I do. But partly, if I'm honest, because of something more personal. Rachel studied art. I studied art and design. There's a particular kind of thinking that an arts education gives you — the ability to sit with complexity, to find the pattern underneath the surface, to ask not just what something is but what it means. I recognise it in her work. I like to think it's in mine.

Earlier this year I heard her speak at a Gartner conference. She was, as always, exceptional. Her central argument — that trust is a confident relationship with the unknown, that it enables action in the face of uncertainty, that it is shifting rather than disappearing — landed with the clarity of something that has been thought about for a very long time.

I sat there nodding. And then I found myself thinking about what she wasn't saying.

Not because she was wrong. She wasn't. But because in the world I've spent thirty years working in — cybersecurity — there is a dimension of trust that her framework doesn't fully reach.

Rachel Botsman explains how trust is evolving.

I explain how that evolution is being exploited.

The Foundation She Built, And What I Saw Standing On It

Botsman's foundational argument is that trust is not a feeling. It's a mechanism. It's how we navigate situations where we cannot have certainty — where we must act despite incomplete information, relying on systems and people whose behaviour we cannot fully predict or verify. She argues that trust is shifting from institutions toward distributed systems, platforms, and individuals — and that we now extend confidence to AI outputs and digital channels in ways that would have seemed extraordinary a decade ago.

She's right. And she's also right that trust is behavioural, not technical. That the issue of trust does not lie in the technology — it lies in leadership, in culture, in the repeated actions that build or erode confidence over time. Trust is earned continuously, not established once.

I apply exactly this thinking in my work on cyber resilience. The technology is almost never the failure point. The leadership and culture around it more often are.

Where we diverge is not in the foundations.

It's in the direction we look from them.

Botsman looks forward and sees opportunity — trust enabling innovation, collaboration, new systems. I look forward and see the same landscape. But I see who else is looking at it.

Trust As An Attack Surface

A finance employee at Arup joined a video call with people he believed were his CFO and several senior colleagues. Every person on that call was a deepfake. He authorised $25 million in transfers. He trusted what he saw. What he saw was manufactured.

In what investigators assess with high confidence as a North Korean state-sponsored operation, attackers spent six months building a trusted relationship with a cryptocurrency protocol's contributors — attending conferences in person, depositing real money, having technically fluent conversations over months — before stealing $285 million in twelve minutes. The attack didn't exploit a technical vulnerability. It exploited trust that had been carefully and deliberately constructed.

These are not edge cases. They are illustrations of a strategic shift in how the most sophisticated adversaries in the world now operate.

The same properties that make trust powerful are precisely what make it exploitable. Trust operates on confidence rather than certainty. It has shifted toward systems and signals — AI outputs, platform ratings, digital channels — rather than the human relationships that were harder to fake. If you wanted to design an attack for the modern world, you would design one that operates in exactly that space. You would manufacture confidence. You would poison signals. You would corrupt the models those signals feed. You would make systems behave in ways their operators trust them not to behave.

Botsman describes trust as a bridge connecting us to the unknown. What she doesn't fully account for is the adversary who has learned to engineer the terrain on either side — who controls what the bridge leads to before you've taken a single step.

Technical controls protect the known. Trust exploits operate in exactly the space technical controls don't reach — the space of confidence, perception, and belief.

And AI has changed the scale, speed, and sophistication of what's possible in that space. Deepfakes industrialise the manipulation of perceptual trust. Agentic AI systems — trusted to act correctly without human verification of each action — create new surfaces for exploitation. Influence operations can now be personalised, targeted, and sustained at a scale no human operator could match.

The shift Botsman identified — from institutional to distributed trust — is real and important. But there is a development her framework doesn't fully account for — one that changes the nature of the fragility entirely.

AI and agentic AI have industrialised trust at scale.

Botsman's distributed trust was built person by person, platform by platform, interaction by interaction. The shift was gradual and human in its origins — we extended trust to systems because individual humans and experiences had earned it incrementally. What AI has done is remove that incremental human foundation entirely. Trust can now be manufactured at machine speed. Confidence can be simulated at industrial scale. An agent can build a relationship, sustain a persona, generate the signals of legitimacy, and execute an attack — all without a human adversary ever being present in the interaction.

What once required months of human operation — building personas, sustaining relationships, manufacturing legitimacy — can now be deployed at machine speed, at scale, against multiple targets simultaneously.

This is what Botsman's framework of distributed trust looks like when it has been weaponised at machine speed. The bridge she describes still exists. But the terrain on either side is now being engineered by adversaries operating at a scale and sophistication that the original framework wasn't designed to anticipate.

What This Means For Leaders

If trust is the mechanism that enables your organisation to function — in customer relationships, partner ecosystems, internal decision-making, regulatory relationships — then the security of trust is not a soft concern. It is a strategic one.

What I've observed across the organisations I work with is that trust attacks don't succeed because the technology fails. They succeed because the culture doesn't create the conditions for early warning signals to travel fast enough.

The deepfake on the video call felt legitimate. The attackers who spent months building a relationship felt like genuine partners. The North Korean IT worker planted at KnowBe4 — one of the world's most security-aware organisations, whose entire business is built on human risk management — passed every HR check before being discovered.

Each failure happened at a different layer, but in every one of them the attack succeeded not by breaking through technology but by exploiting the human systems around it — perceptual trust, relationship trust, identity trust. And where a warning signal existed but went unconnected, what was missing wasn't technology. It was the culture that would have surfaced it.

This is Botsman's insight applied to an adversarial context. Trust is behavioural. Which means protecting trust requires governing behaviour — not just technology. It requires organisations where people feel safe saying "something doesn't feel right here" in the moment, under pressure, before the automated process executes and the money moves. It requires leadership that has made clear what decisions demand human judgement. It requires collaboration across the functions that watch different signals — because trust attacks are almost never visible from a single vantage point.
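That last requirement — making clear in advance which decisions demand human judgement — can be written down precisely enough to enforce. Here is my own minimal sketch of what such a rule might look like in code; the threshold, channel names, and function names are illustrative assumptions, not anyone's production policy:

```python
from dataclasses import dataclass

# Hypothetical policy: requests above this amount, or arriving over a
# channel whose signals can be manufactured, demand human verification
# through a second, independent channel before automation proceeds.
HIGH_RISK_THRESHOLD = 50_000  # illustrative figure, set by leadership

@dataclass
class TransferRequest:
    amount: float
    requested_via: str          # e.g. "video_call", "email", "signed_workflow"
    verified_out_of_band: bool  # confirmed via an independent channel

def requires_human_judgement(req: TransferRequest) -> bool:
    """High risk if the amount is large or the channel is spoofable."""
    spoofable = req.requested_via in {"video_call", "voice_call", "email"}
    return req.amount >= HIGH_RISK_THRESHOLD or spoofable

def approve(req: TransferRequest) -> bool:
    """Execute only when low risk, or when a human has verified the
    request through a channel other than the one it arrived on."""
    if requires_human_judgement(req):
        return req.verified_out_of_band
    return True

# An Arup-style deepfake call: large amount, spoofable channel, no
# independent verification — the gate stops the money moving.
assert approve(TransferRequest(25_000_000, "video_call", False)) is False
```

The point of the sketch is not the code itself but the design choice it encodes: the decision about which requests demand human judgement is made by leadership in advance, in the quiet, rather than by one employee in the moment, under pressure.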

And it requires understanding that there is a specific moment — when an organisation can no longer determine what is real fast enough to respond — when trust collapses.

Not gradually.

Suddenly.

In my next blog I'll be exploring exactly how that collapse happens — and specifically how the trust and signals between departments become the vulnerability that attackers exploit.

Now I Want To Hear From You

Has Rachel Botsman's work shaped how you think about trust in your organisation? And where do you see the gap between trust as an enabler and trust as a vulnerability? Join me on LinkedIn and drop your thoughts in the comments.


Jane Frankland

 

Jane Frankland MBE is an author, board advisor, and cybersecurity thought leader, working with top brands and governments. A trailblazer in the field, she founded a global hacking firm in the 90s and served as Managing Director at Accenture. Jane's contributions over two decades have been pivotal in launching key security initiatives such as CREST, Cyber Essentials and Women4Cyber. Renowned for her commitment to gender diversity, she authored the bestselling book "IN Security" and has provided $800,000 in scholarships to hundreds of women. Through her company KnewStart and the other initiatives she leads, she is committed to making the world safer, happier, and more prosperous.

