Despite actions in 20 states, nationwide laws to curb election deepfakes remain elusive.
On the evening of January 21, New Hampshire resident Patricia Gingrich was preparing to eat dinner when her landline rang. The caller’s voice urged her not to vote in the presidential primary.
“As I listened, I thought, wow, that sounds just like Joe Biden,” Gingrich told Al Jazeera. “But what he was saying didn’t make sense — Biden would never tell people to skip voting.”
It wasn’t Biden. It was a deepfake — an AI-generated imitation crafted to deceive.
Experts caution that deepfakes — artificially created audio, video, or imagery meant to mislead — could seriously impact the 2024 US general election, not only by spreading false information but by undermining trust in the democratic process.
Although Gingrich realized it was fake, she worries that others may not have, potentially lowering voter turnout. The robocall reportedly reached nearly 5,000 voters just before the New Hampshire primary.
“This is dangerous for voters who aren’t fully aware of the situation,” said Gingrich, who chairs the Barrington Democratic Committee. “If they truly believed Biden didn’t want them to vote, they might have stayed home.”
Susceptible Online Communities
The Biden deepfake wasn’t an isolated incident. Before suspending his campaign, Florida Governor Ron DeSantis shared an ad featuring AI-generated visuals of Donald Trump hugging Dr. Anthony Fauci — despite their public conflicts during the pandemic.
Similarly, in September, 300 South Carolina Republican voters received a robocall impersonating Senator Lindsey Graham, asking about their voting plans.
Misinformation is nothing new in politics. Even George Washington contended with forged letters that falsely portrayed him as doubting the American Revolution. But today's AI tools allow fake content to be created quickly, affordably, and convincingly, intensifying the risks.
A recent study by George Washington University researchers predicted a surge in AI-based disinformation attacks by mid-2024, posing a threat to the election.
Lead author Neil Johnson warned that the real danger lies in realistic and subtle manipulations — not the blatantly fake robocalls — as they can evade detection by fact-checkers.
The study revealed how disinformation can spread through interconnected online groups, pushing manipulated content into the public eye.
Communities in battleground states are especially at risk, as are parenting forums on platforms like Facebook.
“Parenting groups will play a key role,” said Johnson, citing the way vaccine misinformation spread rapidly in those circles during COVID-19.
He added, “We’re going to see content that isn’t outright false but bends the truth just enough to mislead.”
Trust in Institutions at Risk
But it’s not only voters being targeted. Election officials are also in the crosshairs. Larry Norden, senior director at the Brennan Center for Justice, works with these officials to recognize deepfakes.
He noted that AI could be used to impersonate a supervisor’s voice or message, misleading poll workers into closing voting stations early.
His advice? Always verify any instructions before acting on them.
While deceptive content isn’t new, AI significantly scales up the problem.
To illustrate the technology’s potential, Norden created a deepfake of himself using old TV interviews.
“It was quick and simple,” he said. While the video wasn’t perfect, it showed how far the technology has come — and how quickly it’s improving.
More concerning than the tech itself is how it affects public perception. As deepfakes become more prevalent, people may start distrusting all media, even genuine content.
This opens the door for politicians to discredit authentic recordings by claiming they’re fake — a concept scholars call the “liar’s dividend.”
Norden pointed to the infamous Access Hollywood tape from 2016. If it were released today, he believes it would be easier for the subject to dismiss it as fake.
“That kind of public doubt is already widespread in the US, and deepfakes may only deepen it,” he warned.
Steps Toward Regulation
Despite growing concerns, there are few national rules against election deepfakes. The Federal Election Commission (FEC) hasn’t issued any regulations yet, and congressional bills remain stuck.
So far, 20 states have passed laws targeting election-related deepfakes, and others — including Hawaii, Louisiana, and New Hampshire — have bills awaiting final approval.
“States are once again taking the lead, acting as laboratories of democracy,” Norden said. “Congress is gridlocked, so states are stepping in.”
Civic groups are also responding. After receiving the Biden deepfake call, Gingrich joined a League of Women Voters lawsuit demanding accountability.
Investigations revealed that political consultant Steve Kramer was behind the calls. He claimed his goal was to spotlight the urgent need for AI regulation in politics.
Kramer also admitted to commissioning the South Carolina robocall, which impersonated Senator Graham.
He used free, publicly available software to generate Biden’s voice in under 20 minutes at a cost of just $1.
Despite the low cost, Kramer claimed the stunt brought $5 million worth of media attention, which he hoped would drive regulatory change.
“I wanted to make an impact,” he told CBS News.
Existing Laws Still Apply
Kramer’s case highlights that current laws can still apply.
The Federal Communications Commission (FCC) recently declared that AI voice clones fall under the Telephone Consumer Protection Act of 1991 — making robocalls that use them generally illegal.
As a result, the FCC proposed a $6 million fine against Kramer.
He also faces state charges in New Hampshire: felony counts of attempting to suppress votes, which carry up to seven years in prison, along with misdemeanor counts of impersonating a candidate. He has pleaded not guilty and did not comment when contacted by Al Jazeera.
Norden pointed out that none of the charges explicitly mention AI. “The legal violations don’t depend on the technology,” he said. “These laws already existed.”
However, applying those laws to anonymous perpetrators or those operating from outside the US is a much greater challenge.
“Intelligence agencies already see Russia and China experimenting with these tools,” Norden said. “We’re not going to solve this just through legislation.”
Both Norden and Johnson stressed the importance of public awareness. With legal gaps and rapid technological growth, voters must learn how to spot deepfakes and seek reliable information.
As for Gingrich, she’s bracing for more attempts to manipulate voters. Her advice: “Make sure you know your voting rights — and that your voice counts.”