Everyone’s talking about AI, aren’t they? When I gave a keynote on Artificial Intelligence and cybersecurity recently, I relayed how the rise of AI has brought us to a pivotal moment in history—a moment brimming with both extraordinary opportunity and unparalleled risk.
Central to this debate, though, is the approach that nations are taking toward regulation. Earlier this year, U.S. Vice President JD Vance made headlines in Paris with a provocative call for “AI acceleration” through deregulation, positioning it as America’s key to leadership in global innovation.
But behind the rhetoric of “less regulation, more opportunity” lies a stark reality—one that risks destabilising global security and economic systems. For business leaders, particularly those in tech, understanding the implications of such policies is critical—not just for protecting their organisations, but for contributing to a future where technology is wielded ethically and responsibly.
That’s what this blog is all about. In it, I’m going to examine the myth that less regulation means more innovation, and the damage this thinking could do to our planet.
The Myth of “Less Regulation, More Opportunity”
The idea that regulation stifles innovation oversimplifies a much more complex reality. When technological advancements go unchecked, they can have serious consequences.
History offers plenty of examples of this, such as the 2008 global financial crisis, fuelled by weak oversight in the finance sector, and the unchecked spread of disinformation across social media platforms today.
AI amplifies these risks exponentially. Its potential to reshape economies, influence elections, and disrupt global security makes unregulated development a gamble humanity can’t afford to take. For example:
- AI-driven cyberattacks dismantling critical infrastructure, from financial systems to power grids.
- Deepfakes and automated disinformation campaigns manipulating elections and further eroding public trust in democracy.
- Monopolistic control by unregulated tech giants crushing innovation and concentrating excessive power into the hands of a few corporations.
- Autonomous AI weapons escalating global conflicts with alarming speed and unpredictability.
The U.S.’s refusal to engage in responsible regulation is not a sign of leadership; it is, sadly, an abdication of responsibility.
Lessons from the European Union and Beyond
While the U.S. has currently chosen a path of deregulation, other global powers are taking a more measured and, some would argue, more responsible approach. The European Union’s AI Act, for example, represents a proactive attempt to mitigate AI risks while ensuring transparency and accountability.
Similarly, over 60 nations, including China, have endorsed an international pledge to ensure AI is developed responsibly, ethically, and transparently.
These measures show that regulation does not stifle innovation; it builds trust, encourages adoption, and creates a more sustainable framework for long-term growth. Ironically, even China, often criticised for its authoritarian use of AI, has acknowledged the need for safeguards. Yet the U.S., the world’s leading tech innovator, remains an outlier, choosing to prioritise corporate interests over global safety.
Ignoring the Risks of Unchecked AI
It would be easy to dismiss these concerns, but the dangers posed by uncontrolled AI development aren’t hypothetical. Experts from OpenAI, Google DeepMind, and MIT have repeatedly warned of the existential threats posed by irresponsible AI practices.
For example, risk scenarios include:
- Mass Job Displacement: Millions of jobs across industries are vulnerable to automation. Without regulatory measures to ensure a just transition, economic inequality will deepen.
- Autonomous Warfare: The proliferation of unregulated AI weapons systems could destabilise global security and lead to catastrophic conflicts.
- Disinformation at Scale: AI-generated deepfakes and misinformation campaigns threaten to erode democratic norms and societal trust.
- Loss of Human Oversight: Advanced AI systems could outpace human control, leading to potentially irreversible consequences for critical infrastructure and decision-making processes.
The U.S.’s refusal to act on global AI safety pledges sends a troubling message to the international community and leaves businesses unprotected against these escalating risks.
The Consequences for Businesses and Global Innovation
Unregulated AI development carries dire implications for businesses. CISOs, CIOs, and CTOs, who are already grappling with the complexities of cybersecurity, supply chain vulnerabilities, and data privacy, will face amplified risks without robust AI safeguards. The absence of regulation also creates a highly uneven playing field, forcing businesses to operate in an environment dominated by reckless actors and monopolistic practices.
On a global scale, this deregulation jeopardises international cooperation and stability. If the U.S. continues down its current path, the world risks becoming a battleground between competing AI superpowers, each prioritising dominance over safety. Meanwhile, efforts by countries like those in the EU to regulate AI responsibly could be undermined if unregulated U.S. products flood the global market.
Responsible AI Governance Drives Innovation, Not Chaos
The false dilemma that pits regulation against innovation must be dismantled. Responsible AI governance is not an obstacle to progress; it’s a necessary foundation for sustainable growth and innovation. Properly designed regulations can:
- Encourage competition by preventing monopolistic practices and fostering a diverse AI ecosystem.
- Build trust in AI technologies, promoting wider adoption across industries like healthcare, finance, and manufacturing.
- Enhance security by ensuring AI is not weaponised or used to exploit critical infrastructure.
Countries that implement strong AI safeguards are creating environments where businesses can thrive ethically and sustainably, setting the stage for long-term success rather than short-term chaos.
The Role of Tech Leaders in Championing Ethical AI
CISOs, CIOs, and CTOs are uniquely positioned to influence the direction of AI development within their organisations. By prioritising ethical and responsible AI practices, tech leaders can:
- Advocate for internal policies that align with global AI safety standards.
- Collaborate with industry peers, think tanks, and policymakers to shape the future of ethical AI governance.
- Educate boards and C-suites on the risks and opportunities associated with AI regulation.
To End
JD Vance’s call for AI acceleration through deregulation is not a bold vision of leadership; it’s an invitation to disaster. By rejecting necessary safeguards, the U.S. risks destabilising economies, undermining democracy, and jeopardising global security.
But it’s not too late to change course. Responsible AI governance is not about stifling innovation; it’s about ensuring that AI remains a force for progress, not a weapon of manipulation or control. By adopting strong regulations, fostering international cooperation, and championing ethical AI practices, the U.S. can lead the world into a future where technology serves humanity, not endangers it.
For tech leaders, the choice is clear. The time to act is now—before the accelerating perils of AI become irreversible.
What Next?
Recently, I became an ambassador for the Global Council for Responsible AI (GCRAI) and would love for you to join me.
Headed up by Carmen Marsh, it’s an international governing body committed to fostering the development of AI technologies that are ethical, secure, and beneficial for humanity. Its mission is to strengthen the future of AI through education, leadership, certification, and advisory services that place human dignity, innovation, and international cooperation at the core. With a global network of regional chapters, GCRAI supports culturally relevant education, localised ethical frameworks, and impactful policy dialogue informed by regional insight.
Recognising that the ethical use of AI is a shared global responsibility, GCRAI engages policymakers, technologists, civil society leaders, and cultural stewards in shaping a future where AI is safe, fair, inclusive, and beneficial to all. While the work continues to evolve, GCRAI brings together the collective wisdom of experts, innovators, and visionaries dedicated to building a future where technology truly serves humanity.