
Beyond Humanity: Why Verifying You’re Real Isn’t Enough Anymore 

September 4, 2025

By Jane Frankland

It was only a matter of time.

When the phone call came, it felt like a simple misunderstanding. A trusted colleague rang to ask about an urgent opportunity I'd apparently called them to discuss. My voice had convinced them, my cadence had reassured them, and my specific phrasing had sealed the deal.

The only problem?

I’d never made the call.

That voicemail, perfectly mimicking me, was not from me. It wasn’t an over-the-top phishing email or a lazy impersonation attempt. It was AI-driven and disturbingly believable. What shook me most was imagining how this could have escalated without their vigilance. What if that call had gone to a client? A board member? A journalist?

The moment brought one thing into sharp focus for me: the dangers posed by deepfake technology are no longer futuristic conjectures. They’re here, they’re evolving, and they’re more sophisticated than we’re prepared for.

But what’s worse? We’re asking the wrong questions – solving the wrong problem.

You see, the conversation right now focuses heavily on verifying a person is human. But in an age of AI-authored deception, proving someone is human only scratches the surface of what’s truly needed.

This is what I’m exploring in this blog. I’m approaching this through the lens of my role with OutThink, the Cybersecurity Human Risk Management platform I proudly represent as an advisor and brand ambassador.

The Oversight in Identity Verification

One of the most talked-about advancements in digital identity is World ID, a concept introduced by Tools for Humanity, which I explored recently in my blog, ‘Deepfakes, Scams, and the AI Conundrum: Navigating Legal and Ethical Challenges’.

Using cutting-edge iris scanning, World ID ties biological uniqueness to an ID, confirming you’re a living, breathing individual. Impressive as the technology and its scope are, World ID verifies that you are human, but not that you’re you.

Here’s the critical gap in this approach. AI-generated identities can simulate more than a generic ‘person’; they can simulate you. They can adopt your voice, your face, even your behaviour, all while targeting reputations, decisions, and relationships in your name.

To illustrate this, ask yourself:

  • When a CEO authorizes millions in funding, how do you know it’s really them behind the screen?
  • When a “colleague” requests sensitive data in a video call, can you trust it’s not an AI-crafted replica?
  • If you’ve agreed to share data or your likeness with an AI tool once, to what degree can that tool replicate you without explicit updated consent?

Proving humanity through biological markers is a step forward, but it’s insufficient to safeguard identity in a world increasingly blurred by technological mimicry.

Where the Risks Multiply

For executives, cybersecurity officers, and decision-makers, the risks tied to identity manipulation via advanced AI are more than hypothetical. They’re lived realities posing pressing challenges.

CEOs and Decision-Maker Exploitation

Deepfake video and audio weaponize trust. Imagine this scenario: a fraudster clones a CEO’s voice and tone, calling a CFO to urgently approve funds for a “time-sensitive” deal. Without sufficient identity verification systems in place, the damage can be irreversible before anyone realizes the truth.

The Irreplaceable Value of Trust

Relationships fuel business. Trust—not just in systems but in people—is foundational. Yet, constant exposure to AI-enabled deception risks eroding this trust, leaving employees questioning leadership decisions and clients second-guessing interactions.

Consent and Its Complexities in the AI Era

Consenting once to train or collaborate with AI doesn’t mean you’ve given unending permission. The lines between what is dismissed as convenience and what breaches ethical boundaries are becoming worryingly thin.

Some professional circles are leaning into AI to amplify reach and scalability. Politicians use voice cloning technology to deliver speeches globally in multiple languages. Models engage digital replicas for showcasing designs virtually. The pace of adoption is remarkable, but so are the stakes. Who owns the likeness? What governs fair use? And how do we separate ethical forward motion from dangerous exploitation?

Flaws in Traditional Identity Security

Historically, cybersecurity protocols like passwords, two-factor authentication (2FA), and even biometrics have protected digital access points. But against the creativity and adaptability of recent AI systems, static and narrowly scoped methods now fall painfully short.

Here’s why current security falls behind:

  • AI evolves faster than static systems. Deepfake capabilities improve continuously. AI attackers refine their craft with each barrier they encounter, while static verification methods remain slow to adapt.
  • Identity is relational, not singular. Knowing who someone is isn’t enough. The context of relationships, authority, and accountability matters more than raw identity verification. Without accounting for these, traditional security overlooks significant vulnerabilities.

The Call to Action for Businesses and Leaders

If there’s one takeaway for C-level executives and cybersecurity leaders, it’s this: conventional identity protocols are no longer enough to combat the evolving threat landscape.

Here are four actions leaders must prioritize as they rethink identity in an AI-dominated age:

1. Reimagine Identity through Multi-Dimensional Verification

Adopt dynamic verification systems, integrating voice, behavioural patterns, and relationship mapping for contextual validation. This multi-layered approach provides richer safeguards against AI-driven identity manipulation.
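To make the multi-layered idea concrete, here is a minimal sketch in Python of how independent signals might be combined into one step-up decision. The signal names, weights, and threshold are illustrative assumptions for the sake of the example, not any vendor’s API:

```python
# Sketch: combine independent identity signals into one contextual decision.
# Weights and the 0.8 threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class VerificationSignals:
    voice_match: float           # 0.0-1.0, e.g. from voice biometrics
    behaviour_match: float       # 0.0-1.0, typing/usage patterns
    relationship_context: float  # 0.0-1.0, does the request fit this
                                 # person's usual authority and contacts?

def verification_score(s: VerificationSignals,
                       weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination of the three signals."""
    wv, wb, wr = weights
    return (wv * s.voice_match
            + wb * s.behaviour_match
            + wr * s.relationship_context)

def requires_step_up(s: VerificationSignals, threshold: float = 0.8) -> bool:
    """Below the threshold, escalate to an out-of-band check
    (e.g. a call-back on a known number) before acting."""
    return verification_score(s) < threshold

# A cloned voice can score highly on its own, but a request that is out of
# character and outside the usual relationship graph drags the total down.
deepfake_attempt = VerificationSignals(voice_match=0.95,
                                       behaviour_match=0.40,
                                       relationship_context=0.20)
print(requires_step_up(deepfake_attempt))  # True: escalate before approving
```

The point of the design is that no single signal, however convincing, is sufficient on its own; an attacker must now defeat voice, behaviour, and relationship context simultaneously.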

2. Elevate Workforce Awareness

Human error is the weak point cyber attackers often exploit. Cybersecurity education, specifically around recognizing AI-enabled deception, should be a regular and required component of organizational training programs.

3. Advocate for Consent Transparency

Legal compliance alone isn’t enough. Organizations must implement systems that make consent explicit, specific, and auditable in real-time for cases involving likeness or data-sharing.
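One way to make consent explicit, specific, and auditable is to treat each grant as a scoped, expiring record backed by an append-only log. The field names and scope strings below are hypothetical, a minimal sketch rather than a real compliance system:

```python
# Sketch: scoped, expiring, revocable consent with an append-only audit trail.
# Field names and scope strings are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject: str          # whose likeness or data
    scope: str            # one specific use, e.g. "voice-clone:keynote"
    granted_at: datetime
    expires_at: datetime
    revoked: bool = False

@dataclass
class ConsentLedger:
    records: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)  # append-only trail

    def grant(self, record: ConsentRecord):
        self.records.append(record)
        self.audit_log.append(("grant", record.subject, record.scope,
                               record.granted_at.isoformat()))

    def revoke(self, subject: str, scope: str):
        now = datetime.now(timezone.utc)
        for r in self.records:
            if r.subject == subject and r.scope == scope:
                r.revoked = True
        self.audit_log.append(("revoke", subject, scope, now.isoformat()))

    def is_permitted(self, subject: str, scope: str, at: datetime) -> bool:
        """Consent is never open-ended: it must match the exact scope,
        be unexpired, and not have been revoked."""
        return any(r.subject == subject and r.scope == scope
                   and not r.revoked and r.granted_at <= at < r.expires_at
                   for r in self.records)

ledger = ConsentLedger()
ledger.grant(ConsentRecord(
    "jane", "voice-clone:keynote",
    granted_at=datetime(2025, 1, 1, tzinfo=timezone.utc),
    expires_at=datetime(2026, 1, 1, tzinfo=timezone.utc)))
when = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(ledger.is_permitted("jane", "voice-clone:keynote", when))  # True
print(ledger.is_permitted("jane", "voice-clone:advertising", when))  # False
```

Because the scope is narrow and every grant and revocation lands in the audit log, “I consented once” can never silently become “I consented to everything, forever.”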

4. Choose Partners Committed to AI Safeguards

Collaborate only with vendors who demonstrate transparency and ethical innovation in their AI development. Your security is only as strong as the systems you depend on.

Looking Ahead – Beyond Just “Proving Realness”

The deepfake era is forcing us beyond the basic question of “Are you human?” Now, we must ask, “Are you truly you, giving consent?” That reframe changes everything, from trust in digital transactions to the ethical boundaries we establish around AI utilization.

Leaders who ignore these questions run the risk of playing permanent catch-up in the arms race against AI mimicry. By contrast, leaders who address these challenges now will lay the foundation for their organizations to distinguish themselves—not just as secure but as champions of clarity, accountability, and trust.

Now I Want to Hear from You

As AI continues to blur the lines of identity and trust, what steps do you think businesses and leaders should take to ensure authenticity and safeguard relationships in this new era? Share your thoughts with me on LinkedIn.


About OutThink

OutThink empowers businesses with smarter, human-centric risk management practices, helping them fight data breaches, enhance cybersecurity protocols, and restore trust in digital interactions. If you’re ready to strengthen your organization’s defences, take the next step.


Jane Frankland

Jane Frankland MBE is an author, board advisor, and cybersecurity thought leader, working with top brands and governments. A trailblazer in the field, she founded a global hacking firm in the 90s and served as Managing Director at Accenture. Jane's contributions over two decades have been pivotal in launching key security initiatives such as CREST, Cyber Essentials and Women4Cyber. Renowned for her commitment to gender diversity, she authored the bestselling book "IN Security" and has provided $800,000 in scholarships to hundreds of women. Through her company KnewStart, and other initiatives she leads, she is committed to making the world safer, happier, and more prosperous.
