Former Pakistan PM Imran Khan Uses AI Voice Clone to Deliver Campaign Speech from Prison

Imran Khan in Lahore in May. Image Credit: Mohsin Raza/Reuters

From his jail cell in Pakistan, former Prime Minister Imran Khan is finding innovative ways to skirt suppression and get his message out. His party, Pakistan Tehreek-e-Insaf (PTI), recently deployed an AI-generated “voice clone” to deliver a campaign speech on Khan's behalf.

The voice mimicking Khan spoke for four minutes during a “virtual rally” live-streamed on social media. The digital gathering attracted over 4.5 million views despite the government's attempts to censor Khan. PTI claims the jailed leader provided a shorthand script that was workshopped into his signature oratory style. The text was then converted into Khan-esque audio using AI tools from ElevenLabs designed to clone speech patterns.

However, ElevenLabs' verification process typically requires users to complete a time-locked voice captcha so that a live recording can be matched against the uploaded voice data. It remains unclear how PTI cleared this safeguard while Khan remains imprisoned. We have reached out to ElevenLabs for comment and will update this article accordingly.
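
For readers curious about the mechanics, the sketch below shows roughly how a prepared script can be turned into audio with an already-cloned voice through ElevenLabs' public text-to-speech API. This is not PTI's actual pipeline; the API key, voice ID, model name, and voice settings are illustrative placeholders, and the exact field names should be checked against ElevenLabs' current documentation.

```python
# Minimal sketch (placeholders throughout): converting a script to speech
# with a previously cloned and verified voice via ElevenLabs' TTS REST API.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"   # placeholder; keep real keys in a secret store
VOICE_ID = "your-cloned-voice-id"     # placeholder ID of an already-cloned, verified voice
SCRIPT = "My fellow Pakistanis... my determination for real freedom is very strong."

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": SCRIPT,
        "model_id": "eleven_multilingual_v2",  # model name may differ in current docs
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
)
response.raise_for_status()

# The endpoint returns audio bytes (MP3 by default); save them for downstream editing.
with open("cloned_speech.mp3", "wb") as f:
    f.write(response.content)
```

The point of the sketch is simply that the generation step itself amounts to a single API call; the harder parts, as the article notes, are getting a verified voice clone in the first place and blending the resulting audio convincingly with real footage.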

“My fellow Pakistanis...my determination for real freedom is very strong,” the AI voice said, while historic footage and images of the former cricket star were shown.

According to PTI, genuine video clips were blended with the AI audio to enhance believability. Captions also periodically flagged that listeners were hearing an “AI voice of Imran Khan based on his notes.”

The speech received mixed reactions. While some lauded the technological ingenuity, others pointed out the subtle but noticeable differences in grammar and delivery, highlighting the limitations of AI in replicating the nuances of human speech.

Khan has been imprisoned since August on charges of leaking classified documents – allegations he contends were manufactured by nervous incumbent powers. Elections are scheduled for February 8, 2024.

This story, however, should be viewed in a much broader context. Next year marks a historic moment in global politics, with more than 40 countries, representing over 40% of the world's population and a significant share of its GDP, preparing for national elections. It is likely that we will also witness the most widespread use of AI yet to influence campaigns and voters.

Of course, the use of AI in elections is not new. But the scale and sophistication of these technologies continue to advance at a rate for which we as a society are wholly unprepared.

Just consider the recent live phone-in event at which Vladimir Putin was questioned by a deepfake AI version of himself. The incident not only highlights the growing sophistication of deepfake technology but also prompts a broader reflection on the ethical and security challenges that lie ahead as world leaders navigate this new digital frontier.

The implications of unregulated generative AI in elections are manifold. Firstly, there's the challenge of authenticity. AI can create hyper-realistic representations of candidates saying or doing things they never actually said or did. This poses a direct threat to the integrity of information, a cornerstone of fair elections. In a world where seeing is no longer believing, voters may find it increasingly difficult to discern truth from fabrication, leading to a breakdown in trust and a potential increase in political polarization.

Secondly, combined with social media, AI can be used to micro-target voters and exacerbate echo chambers and filter bubbles. Tailored content, designed to resonate with individual biases and preferences, can skew the democratic process by creating uneven playing fields and manipulating voter perceptions.

And of course, the potential for foreign interference in elections through AI-generated disinformation campaigns is a pressing concern. Such tactics can be employed to destabilize political landscapes, sow discord, and influence election outcomes, often with little traceability and accountability.

The varied political contexts of the nations heading to the polls in 2024 add another layer of complexity. In countries with more authoritarian regimes or those experiencing internal turmoil, the unchecked use of AI could be a tool for government propaganda or suppression of dissent. Conversely, in more democratic societies, the challenge lies in balancing the benefits of AI in engaging voters and enhancing the democratic process with the risks of misinformation and voter manipulation.

The need for robust regulatory frameworks governing the use of AI in elections is evident. Transparency in AI algorithms, ethical guidelines for AI-generated content, and international cooperation in monitoring and mitigating AI-related electoral interference are crucial steps toward safeguarding the democratic process.

Yet, with the sweeping 2024 electoral slate fast approaching, the window for comprehensive reform is perilously narrow. As nations plunge into unpredictable campaigns, voters will have to navigate an information landscape increasingly clouded by deceptive fiction and digital shades of gray.
