
We Are No Longer Defending Systems. We Are Defending Reality. 

April 3, 2026

By Jane Frankland

There’s a moment – quiet, almost imperceptible – when trust fractures.

Not with a bang. With a whisper.

A voice note that sounds exactly like your CEO. A video indistinguishable from reality. An instruction that feels urgent, plausible, and just unusual enough to bypass instinct, but not unusual enough to stop you.

This is the era we’ve entered. And in my latest keynote, I’ve started calling it, very deliberately, “the Age of Influence.”

Because what we’re witnessing isn’t just a technological shift. It’s a power shift. The ability to shape perception, manipulate belief, and influence decision-making at scale has never been more accessible. And in the wrong hands, it is extraordinarily potent.

From Systems To Minds

Cyber attacks have traditionally been about money. Ransomware. Fraud. Extortion. Clear objectives, measurable outcomes.

But increasingly, they are about minds.

Deepfakes are no longer a novelty. They’re not a distant threat debated at conferences. They are here – operationalised, and woven into espionage, fraud, and influence campaigns at scale.

What concerns me most isn’t their sophistication. It’s our collective overconfidence in our ability to spot them.

For years, we’ve told people: Look closely. You’ll notice something off. Trust your instinct. Use your intuition.

But what happens when there is nothing obviously off?

In early 2024, a finance employee at Arup, one of the world’s most respected engineering firms, joined what appeared to be a routine video conference with the company’s CFO and several senior colleagues. He authorised fifteen transactions totalling $25 million. Every person on that call was a deepfake. Every face, every voice, entirely synthetic. The fraud was only discovered when he checked with head office afterwards.

No malware. No exploit. No phishing link.

Just trust, weaponised.

And this is what makes the Arup case so important – the employee did what we tell people to do. He verified. He asked to see faces. He looked for the signs. There were none.

The Correction Imperative

This is where the concept of correction becomes critical – not as a reactive afterthought, but as a strategic capability.

In intelligence circles, correction isn’t about fixing errors after the fact. It’s about continuously validating reality. Asking,

“Is this true? How do we know? What if it isn’t?”

That mindset needs to be embedded far more deeply across our organisations.

Popular culture has been circling this for some time. The BBC series The Capture imagines a world where video evidence can be manipulated in real time – where “correction” becomes a tool not to reveal truth, but to reshape it. Yes, it’s fiction. Yes, it’s entertaining. But here’s the thing: right now, it’s uncomfortably close to the direction of travel.

If we can already generate convincing synthetic voices and video – entire boardrooms of fabricated executives – what comes next isn’t a leap. It’s an iteration. Real-time manipulation. Context-aware deception. Evidence that adapts as it spreads.

And that leads us to something more profound – what I’d describe as a potential truth collapse moment.

A point where real content is dismissed as fake. Fake content is accepted as real. And the average person, including the average employee, no longer knows what to trust.

We are already seeing the early signs. The existence of deepfakes creates what’s widely called the “liar’s dividend.” When anything can be faked, everything becomes deniable.

Trust doesn’t just erode. It fragments.

The Geopolitical Dimension

In the Age of Influence, this doesn’t just impact organisations. It impacts societies.

In late 2023, days before Slovakia’s general election, an audio clip began circulating. It appeared to be a conversation between a journalist and the leader of the liberal Progressive Slovakia party, apparently discussing how to rig the vote. The clip was AI-generated. Progressive Slovakia went on to suffer an upset defeat. Whether the deepfake changed the outcome we cannot know for certain. What we do know is that it didn’t need to be real to do damage. It only needed to be believed long enough.

That is the nature of influence operations at a geopolitical level. They’re not designed to control societies outright. They’re designed to weaken them – to erode trust, amplify division, create uncertainty about what is real. And in that uncertainty, influence becomes far easier to exert.

This is not about any particular ideology or political direction. It is about destabilisation – creating environments where consensus is harder, trust is lower, and decision-making fractures.

Beyond Fraud: The Competitive Threat

It would be a mistake to think of this as purely a criminal problem. Or purely a nation-state problem.

It is also a competitive problem.

In July 2024, a Ferrari executive began receiving WhatsApp messages apparently from CEO Benedetto Vigna. The messages described a significant acquisition in progress, invoked regulatory authority – claiming Italy’s market regulator and the Milan stock exchange had already been informed – and urged the executive to prepare to sign an NDA immediately. When a call followed, the voice was a convincing replication of Vigna’s distinctive southern Italian accent.

The executive grew suspicious. He asked the caller to name a book Vigna had recommended to him days earlier. The caller couldn’t answer. The call ended abruptly.

Ferrari escaped. But consider what the attacker was actually constructing: not just a payment instruction, but an entire strategic scenario. A fabricated M&A deal. Regulatory cover. Urgency. Secrecy. Had it worked, the attacker wouldn’t only have moved money. They would have extracted intelligence about Ferrari’s deal pipeline from a senior executive who believed they were speaking to their CEO.

This is the dimension of the deepfake threat that doesn’t get enough attention in security conversations. The adversaries we face are not only criminals and nation-states. They are anyone with a competitive interest in what your organisation knows – your strategy, your partnerships, your pipeline, your people.

M&A activity. Regulatory negotiations. Pricing decisions. Board deliberations. All of it is potentially in scope. And the entry point isn’t a vulnerability in your systems. It’s a convincing voice on a call, at the right moment, with the right story.

When the Institution Itself Becomes the Target

The threat isn’t only about deceiving people. Increasingly, it’s about deceiving systems.

Earlier this year, a single individual in Amsterdam opened 46 bank accounts at ABN AMRO using deepfake technology. He obtained stolen identity documents, then used AI to overlay his own facial features onto the passport photos, bypassing the bank’s automated onboarding verification. The fraud wasn’t discovered by the detection system. It came to light by accident – when one application used a woman’s ID but the selfie retained identifiably male features. ABN AMRO investigated and found the full scale of what had happened.

This is the detail that should concern every financial services leader reading this: the system didn’t catch it. Chance did.

And here is what makes it strategically significant beyond the immediate fraud. When a deepfake passes onboarding, the attacker doesn’t break into the system. They become a verified customer. From that moment, the institution unknowingly provides the infrastructure – the accounts, the cards, the credibility – for whatever comes next.

We tend to think of deepfake attacks as external intrusions. This case reframes the threat entirely. The attacker was inside, by invitation, from day one.

What Leadership Looks Like Now

We are no longer just defending systems. We are defending reality. And to do that, we first need to be honest about something.

We have spent years telling people what to look for. Spelling mistakes. Suspicious links. Unusual requests. That guidance was practical, easy to communicate, and it worked. Sometimes.

But it treated behaviour as an individual characteristic – something a person either had or didn’t. It ignored the reality that how people act is shaped by the situation they’re in: how much time they have, how the tools around them are designed, what signals they receive about what’s important, and whether the culture they work in makes it feel safe to slow down and question. When we focus only on the moment of decision, we don’t just simplify the problem – we risk misunderstanding it entirely.

The Arup case proves that assumption wrong, but not in the way we might hope. The employee followed every instinct we trained into him. The problem wasn’t his behaviour. It was that the environment around him wasn’t designed to catch what he couldn’t.

The Ferrari case proves something else: that resilience isn’t about spotting the mechanism of the attack. It’s about developing the human qualities that hold across all kinds of attacks – critical thinking, the emotional intelligence to sense when something feels wrong, and the confidence to question even when everything appears legitimate. The executive who stopped that call didn’t do so because a training module told him to. He did so because something in him paused, and the culture around him made pausing feel like the right thing to do.

Those qualities don’t emerge from awareness programmes alone. They emerge from culture. And culture is built deliberately, or not at all.

That means four things, in my view:

First: leadership has to own this. Not delegate it. The deepfake threat sits at the intersection of technology, culture, and organisational design, and that intersection is the CEO and board’s territory, not just the CISO’s. How an organisation responds to this threat is a direct reflection of the values and priorities set at the top. If leaders treat verification as friction and speed as the ultimate virtue, that signal travels fast and far.

Second: governance must make verification non-negotiable. Verification loops, out-of-band confirmation, multi-person authorisation for high-value transactions, pre-agreed code words for sensitive conversations: these aren’t bureaucracy. They’re infrastructure. And infrastructure requires governance – clear ownership, defined standards, regular testing, and board-level visibility. The same applies to onboarding. If your identity verification relies on a single automated check, you are not verifying identity. You are verifying a file. That is a governance failure before it is a technology failure.
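To show how simple the underlying control logic is, here is a minimal sketch in Python of a multi-person authorisation gate with an out-of-band check. Everything in it – the threshold, the function name, the approver IDs – is an illustrative assumption, not a prescription or a real payments API:

```python
# Illustrative sketch only: a policy gate for releasing payments.
# Threshold and names are assumptions, not a real system's values.

HIGH_VALUE_THRESHOLD = 100_000  # assumed policy threshold


def authorise_payment(amount: int, approvals: set[str],
                      out_of_band_confirmed: bool) -> bool:
    """Return True only when policy conditions hold.

    approvals: distinct approver identities (a set, so duplicates
    from the same person cannot count twice).
    out_of_band_confirmed: True once the request has been re-confirmed
    on a separate, pre-agreed channel (e.g. a call-back to a number
    already on file, never one supplied in the request itself).
    """
    if amount < HIGH_VALUE_THRESHOLD:
        # Routine payments: a single approver suffices.
        return len(approvals) >= 1
    # High-value: at least two distinct approvers AND an
    # out-of-band confirmation before funds move.
    return len(approvals) >= 2 and out_of_band_confirmed
```

The design point is that no single channel – and no single person – can release a high-value payment, which is exactly the property that defeats a convincing voice on a call.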

Third: build cultures where people are expected to question. Not cultures of suspicion, but cultures of disciplined curiosity. Where escalation is rewarded, not discouraged. Where bad news travels fast and is never suppressed, softened, or quietly buried. In a world of deepfakes and deception, an environment where people feel they cannot pause or push back is not just a cultural problem. It is a security vulnerability.

Fourth: use technology to reinforce human judgement – not replace it. Detection tools, verification systems, and intelligent controls all have a role. But when we automate oversight without designing for human understanding of what the system is doing and why, we create new blind spots. The goal is not to remove humans from the loop. It is to ensure the loop is designed so that humans within it can actually see, question, and act.

Culture and technology alone are not enough. Leadership sets the tone. Governance makes it stick. And the relationship between all of them is where security either holds or fails.

No Organisation Can Solve This Alone

There is one more dimension that our security conversations too rarely address – collaboration.

The adversaries behind these attacks share intelligence, refine their techniques, and learn from each other’s successes. The question is whether we do the same.

Cross-sector threat intelligence sharing, industry-wide verification standards, coordinated incident disclosure – these are not nice-to-haves. In a threat landscape where a technique proven against a bank in Hong Kong is adapted and deployed against an engineering firm, a car manufacturer, and a political party within months, isolated defence is not defence. It is delay.

The organisations, and sectors, that will prove most resilient in the Age of Influence will be those that treat collaboration not as a vulnerability but as a strategic asset. That means sharing what we know, when we know it, with the people who need it most.

Our adversaries are not working alone. We cannot afford to either.

What We Can Trust

Not our eyes. Not our ears. Not urgency, not authority, not the plausible instruction that arrives at the right moment.

What we can trust are processes. Verification loops. Well-designed friction. And organisations willing to say: pause, prove it, then proceed.

We are entering a period where seeing is no longer believing. Hearing is no longer knowing. And in some cases, the verified customer sitting inside your systems may not be who you think they are.

But that does not make us powerless.

It makes us more responsible.

The organisations that will thrive in the Age of Influence won’t be the ones that trust the fastest.

They’ll be the ones that verify the smartest.

To End

I work with cybersecurity and executive leaders navigating exactly these challenges. The question I keep returning to, and that I’d put to you now, is this:

Where in your organisation does the pressure to act fast override the instinct to verify?

That gap is where adversaries live.

I’d genuinely like to hear where you’re finding it hardest to close. Head over to LinkedIn and join the conversation.



Jane Frankland

 

Jane Frankland MBE is an author, board advisor, and cybersecurity thought leader, working with top brands and governments. A trailblazer in the field, she founded a global hacking firm in the 90s and served as Managing Director at Accenture. Jane's contributions over two decades have been pivotal in launching key security initiatives such as CREST, Cyber Essentials and Women4Cyber. Renowned for her commitment to gender diversity, she authored the bestselling book "IN Security" and has provided $800,000 in scholarships to hundreds of women. Through her company KnewStart, and other initiatives she leads, she is committed to making the world safer, happier, and more prosperous.
