In an effort to combat a rising tide of phone scams empowered by AI, the Federal Communications Commission on Thursday unanimously approved an immediate ban on unwanted robocalls made with AI-generated voices.
The decision aims to curb an emerging tactic used by fraudsters and scammers who leverage machine learning to mimic voices and personas. In some cases, these AI tools have been used to impersonate celebrities or family members in attempts to steal money or sensitive information from victims.
By classifying these calls as "artificial" under the Telephone Consumer Protection Act (TCPA), the FCC intends to give state attorneys general more leverage to prosecute creators of robocall scams that use voice cloning. The TCPA restricts automatic dialing systems and prerecorded messages, requiring telemarketers to obtain consent before contacting consumers.
The FCC's move follows a notorious incident in January, when an AI-powered robocall that convincingly mimicked President Biden's voice urged New Hampshire residents not to vote in the state's presidential primary. While the call did not ultimately appear to suppress turnout, it raised alarms about the potential for voice cloning tools to spread election misinformation.
In addition to state-level enforcement, the FCC says it will apply its own civil penalties and technical capabilities to block robocall traffic at the source. An intergovernmental task force on illegal robocalls coordinates enforcement efforts nationwide.
With voice cloning technology advancing rapidly, consumer advocates have warned of its potential for fraud even as it promises benefits in other applications. By drawing a clear line against AI-powered phone harassment, regulators hope to curb the scheme before it becomes widespread.