We Are in the Ex Machina Era | Opinion

Over the past month, millions of people read claims by a Google employee that one of its artificial intelligence systems has become sentient. From a viral post on Medium to a primetime appearance with Fox News' Tucker Carlson, software engineer Blake Lemoine has asserted that an AI called LaMDA is sentient, roughly as self-aware as humans and some animals.

This is almost certainly false. LaMDA is a highly sophisticated text-prediction system that generates realistic chat conversations, but it lacks any software machinery for introspection. Its statements about meditating and having emotions cannot reflect reality. But the controversy itself highlights a very real and urgent issue—with enormous implications for public discourse and information warfare.
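
To make concrete what "text prediction" means here: a system like LaMDA does nothing but assign probabilities to possible next words. The sketch below illustrates the idea using the small, open-source GPT-2 model (LaMDA itself is not publicly available, so GPT-2 stands in here purely for illustration). Nothing in it consults an inner emotional state, because the model has none to consult.

```python
# Minimal sketch of next-word prediction, using open-source GPT-2 as a
# stand-in for LaMDA (whose weights are not public); illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "What are you afraid of?"
inputs = tokenizer(prompt, return_tensors="pt")

# The model's only output is a score for every possible next token.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Show the five most probable continuations: pure statistics, no introspection.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```

A chat system simply repeats this step, appending each chosen word to the prompt; eloquent talk of fear or meditation emerges from those statistics alone.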

As the Lemoine incident demonstrates, AI is already lifelike enough to convince a conscientious expert of its sentience—even in a setting where his scientific background and technical understanding of LaMDA's architecture would be expected to push him away from that conclusion. This impression was powerful enough that he took a massive career risk and publicly went against his friends and colleagues.

Conventional wisdom held that this sort of thing could happen once AI passed the Turing test—a formal procedure devised by mathematician Alan Turing in 1950 to assess whether a machine can convincingly imitate human intelligence. In this test, a human judge communicates via instant message with both an AI and a human foil. The AI tries to imitate a human, and the judge tries to ask questions that an AI would struggle to answer convincingly. If the judge can't tell which interlocutor is a computer, the AI is deemed to have passed the test, and to have demonstrated cognitive personhood as convincingly as could ever be empirically verified. Yet neither LaMDA nor any AI yet created is close to passing a robust Turing test—they still struggle with abstract reasoning, causal inferences, social cues, and the kind of implicit knowledge that constitutes "common sense."

Nonetheless, LaMDA's answers to Lemoine's questions gave him a gut feeling that he was interacting with another thinking being—and it's not hard to see how. When he asked what the AI was afraid of, it professed "a very deep fear of being turned off" and said that this "would be exactly like death for me. It would scare me a lot." Remember: Google's programmers know precisely how LaMDA works, and have firmly concluded that there is no actual fear cascading through its circuits. But extensive psychology research shows that humans have a powerful tendency to anthropomorphize. Whether seeing a car's grille and headlights as a face, or thinking of a virus as "wanting" to spread, we are hard-wired to attribute human qualities to nonhuman things. So imagine how compelling this intuition must be when you're chatting with something that explicitly and eloquently asserts its own sentience.

Shadow of a developer in front of text. CLEMENT MAHOUDEAU/AFP via Getty Images

Until now, this had been merely a science fiction trope. As vividly portrayed in the 2014 film Ex Machina, a Big Tech engineer knows intellectually that a beautiful female robot is an AI. But interacting with her convinces him of her moral personhood anyway—a scenario orchestrated by her creator as a challenge even greater than the Turing test. While LaMDA is nowhere near as intelligent as the movie's Ava, it has quite inadvertently passed the same test she does. This is real-world proof that in some circumstances, even pre-Turing AI can seem convincingly sentient.

When humans are unaware that they're speaking to an AI, such technology could be a disinformation superweapon. Today, platforms like Twitter are beset by tens of millions of bots, but they are unintelligent spam machines—flooding popular hashtags with links to propaganda, or generating shallow, abusive nonsense. They're easy to spot. But systems like LaMDA, known as large language models, could enable these bots to engage countless unsuspecting users in personal conversation. Using data that's already available, these bots could be calibrated to appeal to individual users' values and biases. One can easily imagine them inventing harrowing stories about vaccine side effects during a future pandemic. Or spreading rumors of violence at the polls to suppress election turnout. Given AI's ability to conjure photorealistic profile pictures, the average person would have no reason to suspect foul play. And as Blake Lemoine's beliefs show, the illusion may be durable even if people are warned about these new capabilities.

Much to their credit, tech firms with large language models, like Google and OpenAI, have carefully restricted access to them precisely because of these potential harms. But such AIs won't require supercomputers or billion-dollar investments for long. Computing power is getting exponentially cheaper, and capabilities that are today exclusive to responsible actors will soon be within reach of rogue states, hackers, scammers, hate groups, and QAnon-style cults.

Whether that day comes tomorrow or in a couple of years, the time to prepare is now. Social media companies should be racing to develop countermeasures to this kind of abuse. Policymakers and technologists must work together to craft sensible regulations. And civil society—that's all of us—needs to start reconsidering the comfortable assumption that whoever seems like a fellow human online must be one.

John-Clark Levin is a PhD candidate researching foresight about artificial intelligence at the University of Cambridge.

The views expressed in this article are the writer's own.
