Generative AI Audio Deepfakes of President Biden Emerge in Election Disinformation
Generative AI is already being weaponized as a disinformation tool in the U.S. Presidential Primaries. (image credit: Getty)

Hot on the heels of announcements that companies such as OpenAI have pledged to bar their generative AI solutions from use in global election campaigns comes news that an audio deepfake of U.S. President Joe Biden was used to target voters in New Hampshire’s presidential primary. According to reports, the deepfake of President Biden was deployed in robocalls encouraging voters not to vote.

The generative AI tool used and the party responsible for creating the robocall campaign are currently unknown.

The New Hampshire Department of Justice released the following statement on the incident:

“The Attorney General’s Office has received complaints regarding a recorded message encouraging voters not to vote in the January 23, 2024, New Hampshire Presidential Primary Election. The message, which was sent on January 21, 2024, stated “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday. It’s important that you save your vote for the November election.” Although the voice in the robocall sounds like the voice of President Biden, this message appears to be artificially generated based on initial indications.”

“Your vote makes a difference in November, not this Tuesday.”

Deepfake voice of U.S. President Biden, created using an unknown generative AI tool, in a robocall campaign to New Hampshire Presidential Primary voters.

The messages were deemed an unlawful attempt at voter suppression.

The New Hampshire DOJ requests that anyone who received the robocall email the DOJ Election Law Unit, following the instructions on the DOJ website.

While President Biden isn’t on the New Hampshire primary ballot, whoever coordinated the deepfake robocall apparently intended to damage his electoral standing.


Listen to the deepfake robocall that bombarded New Hampshire voters, in which a generative AI tool was used to create a fake statement by U.S. President Biden discouraging eligible voters from participating in the New Hampshire Presidential Primary.

Experts: More Deepfakes of Political Candidates on the Way Using Generative AI

This abuse of cheap, easily accessible generative AI tools is precisely what many experts and OpenAI have warned about and attempted to safeguard us from. Yet here we are.

Indeed, trying to control and regulate a generative AI tool – whether for audio, image, or text generation – is a bit like trying to catch the horse after it leaves the barn. Tech companies can engineer safeguards and guardrails against obvious violations of their generative AI policies. Still, users will pivot to other tools with lax policies (or no policies at all) or find ways to jailbreak the tool.

While the President Biden audio deepfake raised eyebrows given the significance of the U.S. Presidential primaries, it is hardly the first time the tactic has been used.

In 2023, synthetic audio samples were used to influence politics and elections in the U.K., India, Nigeria, Sudan, Ethiopia, and Slovakia, according to the Financial Times.

