Last month prospective voters across New Hampshire were confronted with a fake Joe Biden robocall urging them not to vote in the state’s presidential primary. The call led to renewed speculation that malicious actors were already busy sowing confusion about the 2024 election.
Those fears initially focused on Life Corp., a Texas-based company operating out of a strip mall. The truth may be even weirder, because a New Orleans-based magician now claims to be the source of the bogus Biden call. Even more concerning is that the magician, Paul Carpenter, claims that senior advisers to a rival presidential candidate, Rep. Dean Phillips, D-Minn., financed the whole shady operation.
According to Carpenter, Phillips adviser Steve Kramer paid only $150 for the initial recording. It’s a sobering reminder that the cost of producing misinformation is plummeting as technology improves. After years of federal efforts to protect America’s democratic process from foreign interference, policymakers now face another challenge: a homegrown boom in deepfake production.
Phillips’ team denies it contracted a magician to make Americans’ trust in democracy disappear, but regardless, the scenario wouldn’t be that far outside the norm. Convincing deepfakes are now within nearly anyone’s reach, a function of both reduced cost and increased user-friendliness. Back in 2019, Ars Technica’s Timothy B. Lee spent two weeks and just over $500 creating a (pretty unconvincing) deepfake of Facebook CEO Mark Zuckerberg. Now a slew of startups, including Deepfakesweb, allow users to create far superior video fakes for just $19 a month. If you’re looking for the kind of audio trickery Phillips’ campaign is accused of distributing, the costs are even lower.
Over the weekend I set out to test that claim by creating an audio deepfake in which Biden recites lyrics from Taylor Swift’s song “You Belong With Me.” My search led me to Danny Siegel, a Columbia University graduate student who studies the security implications of deepfake technology. It took Siegel under an hour to produce a convincing audio file from a voice model trained on just one minute of Biden’s public remarks. (To avoid the potential spread of misinformation, we are not sharing the file here.)
Despite recent efforts by some AI companies to limit the use of prominent voices like Biden’s in potentially misleading ads, Siegel notes that safeguards are still imperfect. “Adding background music significantly reduces detectability,” he said. “This limitation was highlighted during the Biden robocall incident, where the audio was only identified with an 84% probability” by AI firm ElevenLabs.
ElevenLabs banned the use of Biden’s voice after the New Hampshire incident, but Siegel thinks stronger protections are still needed. Even with Biden’s voice declared a “no-go,” it remains possible to manipulate raw audio files enough to create a deepfake on the platform. In the absence of federal action, industry self-regulation remains an imperfect tool for combating deepfake disinformation.
The Biden deepfake that duped voters in New Hampshire wasn’t a sophisticated foreign intelligence operation, nor was it a costly scheme cooked up by the Trump campaign. It was deployed in the same mundane fashion as a standard campaign attack ad. But AI disinformation isn’t your garden variety media buy — it’s attack ads on steroids, with powerful hangover effects on voter trust in elections.