Fight Fire With Fire To Address Generative AI in 2024 | Opinion

Amid the usual debates over inflation and immigration, a new and growing cloud hangs over America's next presidential election: generative artificial intelligence (AI).

For the first time in U.S. history, Americans are broadly concerned about AI's potentially adverse impact on the 2024 race. Over half of U.S. voters believe AI-related misinformation will affect the presidential election outcome in 2024, with millions of Democrats and Republicans expressing similar concerns. It is no surprise that Senate Majority Leader Chuck Schumer hosted an AI Insight Forum earlier this month, with elected officials and business leaders now discussing federal legislation to regulate AI.

Corporate America is taking notice, with Silicon Valley actively pursuing the disclosure of so-called synthetic content. Google recently required political advertisers to disclose the use of AI in audio clips and images.

As Americans wade into the deep end of the 2024 election cycle, we are navigating uncharted waters. AI-generated content can have real-world and real-time consequences, which were all too apparent when a fake picture of a Pentagon explosion shook social media and the stock market this past May. What if Joe Biden or Donald Trump is the victim next time around? Or, worse yet, the culprit?

People are right to wonder, but these are the real questions to ask: Are there common-sense practices or policies that will help us accurately sift through a flood of AI-generated material and discern what is real and what is not? And who is responsible for enforcing such policies?

It is not difficult to imagine the following scenario: Candidate A pulls away in the polls and looks like a shoo-in. Candidate B's team generates a scathing, but fake, piece of AI-generated content just before ballots drop. The salacious content goes viral and severely hampers Candidate A's electability. A week or two after the election has been called for Candidate B, it comes out that the whole scandal was fabricated and Candidate A was cheated out of the election.

This is perhaps an even likelier scenario: A piece of compromising information comes out about Candidate A, whose campaign denies the accusation and claims the candidate is the victim of an AI-generated scheme. Candidate A goes on to win, but by the time it comes out that the accusation was actually true, Americans have all moved on to more important items, such as Netflix making Suits popular again.

In terms of regulation, many solutions will be reactive, arriving only after a malicious use of AI has occurred and presumably caused significant damage. A proactive approach is more uncertain, but undoubtedly worth considering.


Here's one answer to the AI question: the back-burn solution. One common technique in fighting wildfires is "back burning," which means setting smaller, controlled fires along a man-made or natural firebreak in the path of an advancing fire front. Those controlled burns consume the fuel ahead of the wildfire, leaving a barrier of spent ground that the larger blaze cannot cross.

In a similar sense, we can fight the spreading AI fire with more fire. If it is possible to train AI to manufacture fake images and audio clips, it is also possible to train AI to identify and flag AI-generated content. There are already tools that can distinguish synthetic content from authentic content and make that distinction transparent to the people consuming it. At that point, the content can be viewed or heard with caution, put up for review, or removed (let's say, by Facebook or X).

A centralized database of "known offenders" could also be created and checked to ensure that previous hoaxes don't make the rounds again. This type of solution would address the short-term issue of reputational damage while also reducing the likelihood of repeat offenses down the road. Pairing an up-to-the-minute monitoring mechanism with a database of known falsehoods, combining AI identification with centralization, essentially creates a digital firebreak. And so, the fire is contained.
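To make the idea concrete, here is a minimal sketch, in Python, of how that two-part check might work. Everything in it is illustrative rather than an actual platform's system: the detector is a placeholder, the database of known hoaxes is empty, and the thresholds are invented; a real deployment would plug in trained detection models and a shared, industry-wide database.

```python
import hashlib

# Stand-in database of fingerprints for previously debunked content ("known offenders").
# In a real deployment this would be a shared, centrally maintained service, not a local set.
KNOWN_HOAX_FINGERPRINTS: set[str] = set()


def ai_likelihood(content: bytes) -> float:
    """Placeholder for an AI-content detector.

    A real implementation would run a trained classifier over the image, audio,
    or text and return a probability that the content is synthetic.
    """
    return 0.0  # assumption: no real detector is wired in for this sketch


def triage(content: bytes, review_threshold: float = 0.5, remove_threshold: float = 0.9) -> str:
    """Apply the two-part 'digital firebreak': database lookup first, then detector score."""
    fingerprint = hashlib.sha256(content).hexdigest()

    # 1. Centralized check: has this exact content already been flagged as a hoax?
    if fingerprint in KNOWN_HOAX_FINGERPRINTS:
        return "remove"  # a repeat offense gets blocked immediately

    # 2. AI identification: how likely is the content to be synthetic?
    score = ai_likelihood(content)
    if score >= remove_threshold:
        return "remove"
    if score >= review_threshold:
        return "review"  # hand off to human moderators or label with a caution
    return "allow"


if __name__ == "__main__":
    print(triage(b"example post content"))  # prints "allow" with the placeholder detector
```

An exact hash only catches content reposted byte for byte, so a production system would also need perceptual or fuzzy matching to catch cropped or re-encoded copies, but the basic flow of checking the shared database first and the detector second stays the same.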

Technology companies, for starters, should consider the back-burn solution as our election year nears. After all, who knows generative AI better than generative AI?

Any attempt to stop AI-wielding bad actors in 2024 will be far from complete. Bad actors do exist, and that isn't going away, not now and not ever. There are important free-speech issues to weigh so that individual liberties are not needlessly infringed upon by government regulators and other entities. Any regulation also raises ethical and moral questions regarding generative or even predictive AI's ability to identify "truth," and those questions should be taken seriously. But one thing is clear: This election cycle, we need to be prepared to stop AI-generated misinformation on a broad scale.

The fight-fire-with-fire strategy appears to be an efficient, effective approach in the short run, while the bigger-picture questions are worked out. As AI quietly fights against itself, Americans can all go back to politics as usual, and more episodes of Suits. Who knows: After the writers' strike is settled, perhaps Netflix will give us another season of shocking lawsuits and serious-looking file folders, all AI-free, of course.

Ryan Waite is vice president, creative and advertising strategist for Push Digital Group.

The views expressed in this article are the writer's own.
