The Challenge of Trusting AI: Balancing Risks and Control in Communication

  • Zenia Pearl V. Nicolas
  • Oct 8
  • 3 min read

Updated: Nov 5

A worried man in a suit talks on the phone. A digital display reads "1 million euros kidnapped journalists urgent!" with a heartbeat graph.

As voices generated by AI become indistinguishable from real humans, questions intensify over ethics, regulation and the infrastructure fueling this leap.

Late one afternoon in February, several wealthy businessmen in Italy received a shocking call. The voice on the line was unmistakably that of Defence Minister Guido Crosetto, pleading for one million euros to free kidnapped journalists abroad. But, as investigations later revealed, “it was not Crosetto at the end of the line” (Al Jazeera, 2025). That chilling breach marked a turning point: AI is no longer just mimicking human tone; it is becoming humanlike in voice, cadence and persuasion.


In October 2025, as AI-generated speech blurs the line between human and machine, new infrastructure and regulatory moves are intensifying the stakes. To understand what’s at risk, we must examine both the capabilities and the power systems behind them.


From Deepfake Audio to Systemic Threats

Advances in AI-generated voice, sometimes called “deepfake audio,” now allow the creation of ultra-realistic voiceovers and sound bites. According to Al Jazeera, “Indeed, new research has found that AI-generated voices are now indistinguishable from real human voices” (Al Jazeera, 2025). These technologies are no longer edge cases. In the Crosetto scam, fraudsters used voice cloning to convince prominent targets, making the fraud more credible.


The danger is multidimensional. On one hand, these tools can amplify disinformation, impersonate leaders and facilitate financial fraud. On the other, they drive demand for compute, which in turn reshapes the AI arms race and the risk landscape.


Infrastructure Arms Race: OpenAI, AMD and the Compute Surge

The recent multiyear chip-supply deal between OpenAI and AMD underscores how essential raw compute power has become to AI’s evolution. Under this agreement, AMD will provide high-performance AI chips and grant OpenAI the option to acquire up to a 10 percent stake in AMD (Reuters, 2025; Al Jazeera, 2025). “We view this deal as certainly transformative, not just for AMD, but for the dynamics of the industry,” AMD executive vice president Forrest Norrod told Reuters (Reuters, 2025).


This deal is among many that signal a wave of investment into AI infrastructure. As Reuters frames it, companies are “channeling billions into AI infrastructure as demand booms” (Reuters, 2025). The capacity to generate more realistic voices, images and agents depends on ever-larger, specialized hardware. Without governance that matches this acceleration, risks may outpace control.


Regulation, Trust and the Human Question

When AI begins to sound human, trust becomes fragile. Who verifies a voice on a call? Can legislation keep pace with model changes that occur monthly? EU leaders are pushing ambitious AI strategies to reduce reliance on U.S. and Chinese tech, stressing sovereignty and safety (Financial Times, 2025). Meanwhile, groups in the U.S. are debating guardrails for advanced AI systems that blend cognition, action and mimicry.


A recent ethical framework proposed by global AI policy groups suggests a set of “transparency, accountability and human oversight” principles. But translating them into enforceable regulation across borders is a different challenge altogether.


A Human Lens on Synthetic Speech

At its core, the scramble to shape AI’s voice is also about shaping its identity and social contract. If machines speak like us, do they deserve rights or scrutiny? If we cannot trust what we hear, what anchors remain for public discourse?


The Crosetto case was more than a fraud; it was a warning embedded in our daily lives. As AI voices multiply, our defenses must evolve too: verification tools, digital voice signatures, public awareness and policy. The question isn’t just whether AI can speak like us, but whether we can still trust what we hear.
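To make the idea of a “digital voice signature” concrete, here is a minimal sketch, not a production design: it treats a voice recording like any signed payload, binding the audio bytes to a secret key with HMAC-SHA256 so that a later copy can be checked for tampering. The key and audio values are placeholders; real systems would more likely use public-key signatures or standardized audio watermarking.

```python
import hmac
import hashlib


def sign_audio(audio_bytes: bytes, secret: bytes) -> str:
    """Return a hex signature binding the audio to the secret key."""
    return hmac.new(secret, audio_bytes, hashlib.sha256).hexdigest()


def verify_audio(audio_bytes: bytes, secret: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time.

    A False result means the clip was altered or was never signed
    with this key.
    """
    expected = sign_audio(audio_bytes, secret)
    return hmac.compare_digest(expected, signature)


# Hypothetical example: a statement is signed at the source...
secret = b"ministry-signing-key"        # placeholder key
clip = b"\x00\x01example-audio-bytes"   # placeholder audio payload
sig = sign_audio(clip, secret)

# ...and any later copy is checked before it is trusted.
print(verify_audio(clip, secret, sig))                # True: untouched clip
print(verify_audio(clip + b"edit", secret, sig))      # False: tampered clip
```

The design choice worth noting is `hmac.compare_digest`, which avoids timing side channels during verification; a shared-secret scheme like this only works between parties who already trust each other, which is why broadcast settings point toward public-key signatures instead.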

