AI experts are sounding the alarm on tech ahead of the 2024 election: ‘We’re not prepared for this’

Cutting-edge AI experts and political scientists are sounding the alarm about the unregulated use of AI tools as an election season approaches.

Generative AI can not only quickly produce targeted campaign emails, texts or videos, but it could also be used to mislead voters, impersonate candidates and undermine elections at a scale and speed never before seen.


A booth is ready for a voter, Feb. 24, 2020, at City Hall in Cambridge, Mass., on the first morning of early voting in the state. (AP Photo/Elise Amendola)

“We are not prepared for this,” warned AJ Nash, vice president of intelligence at cybersecurity firm ZeroFox. “For me, the big leap forward is in the audio and video capabilities that have emerged. When you can do it on a large scale and distribute it on social platforms, well, it’s going to have a major impact.”

Among the many capabilities of AI, here are a few that will have important ramifications for elections and voting: automated robocall messages, in a candidate’s voice, asking voters to vote on the wrong date; audio recordings of a candidate allegedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or an interview they never gave.

Fake images designed to look like local news reports, falsely claiming that a candidate has dropped out of the race.


“What if Elon Musk calls you personally and tells you to vote for a certain candidate?” said Oren Etzioni, founding CEO of the Allen Institute for AI, who stepped down last year to launch the nonprofit AI2. “A lot of people would listen. But it’s not him.”

Petko Stoyanov, global chief technology officer at Forcepoint, an Austin, Texas-based cybersecurity firm, predicted that groups seeking to meddle with American democracy will use AI and synthetic media to erode trust.

“What happens if an international entity – a cybercriminal or a nation-state – impersonates someone? What is the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.”

AI-generated political misinformation has already gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to deliver a speech attacking transgender people to AI-generated footage of children supposedly learning Satanism in libraries.


Panelists talk about artificial intelligence at the Milken Institute Global Conference. (Milken Institute)

AI footage appearing to show Trump’s mug shot has also fooled some social media users, even though no mug shot was taken when the former president was arrested and arraigned in Manhattan criminal court on charges of falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

Rep. Yvette Clarke, D-N.Y., introduced legislation that would require candidates to label campaign ads created with AI. Clarke also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact.

Some states have offered their own proposals to address concerns about deepfakes.

Clarke said her biggest fear is that generative AI could be used ahead of the 2024 election to create video or sound that incites violence and pits Americans against each other.


“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We have to put up guardrails. People can be tricked, and it only takes a fraction of a second. People are busy with their lives and they don’t have time to check every piece of information. With AI being weaponized, in a political season, this could be extremely disruptive.”

The Associated Press contributed to this report.
