Thinking Through the Turing Test

A British Turing Bombe machine is seen functioning in Bletchley Park Museum in Bletchley, central England, on September 6, 2006. Alessia Pierdomenico/Reuters

In a study conducted over the weekend, a computer program fooled more than a third of its human interviewers into believing it was "Eugene Goostman," a 13-year-old Ukrainian boy for whom English is a second language. That feat, known as passing the Turing Test, had never before been accomplished. When the news broke earlier this week, it was touted as a milestone. The Independent, for example, wrote, "Super-computer becomes first to convince us it's human."

In the days that followed, however, articles began questioning that claim. They pointed out, for instance, that it was something of a cop-out to have the chat-bot pose as a 13-year-old Ukrainian ESL student. (Who could blame him for responses like "I can't disclosure my thoughts!"?) And a quick chat with Goostman betrays the bot's limitations. When Wired asked where Goostman was from, the bot replied, "A big Ukrainian city called Odessa." When Wired followed up by asking if Goostman had ever been to Ukraine, it replied, "I've never been there." The magazine concluded, not wrongly, that "the results felt something like an AIM chatbot circa 1999."

So what does it actually mean for Goostman to have passed the Turing Test?

The test was proposed in 1950 by the logician and computer scientist Alan Turing, who was integral in breaking the German Enigma code during World War II. On the subject of artificial intelligence, Turing published a paper, "Computing Machinery and Intelligence," exploring the question "Can machines think?" The paper suggests that one way of evaluating this would be whether a machine can convince a human interviewer that it is human. Turing speculated that within 50 years (by the year 2000), a machine would exist that could do this at least 30 percent of the time after five minutes of questioning.
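
To make that threshold concrete, here is a minimal sketch, in Python, of how the pass criterion can be scored. The judge verdicts below are invented for illustration; only the 30 percent figure and the five-minute format come from Turing's paper.

```python
# Minimal sketch of scoring Turing's pass criterion.
# A verdict of True means a judge mistook the machine for a human
# after a five-minute text chat. The verdicts here are invented.

def passes_turing_criterion(verdicts, threshold=0.30):
    """Return True if the machine fooled at least `threshold` of the judges."""
    fooled = sum(verdicts) / len(verdicts)
    return fooled >= threshold

# Hypothetical panel of 30 judges, 10 of whom were fooled (about 33 percent,
# the share Goostman reportedly reached at the Royal Society event).
verdicts = [True] * 10 + [False] * 20
print(passes_turing_criterion(verdicts))  # True: 33% >= 30%
```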

But just because the English-impaired bot Eugene Goostman convinced 33 percent of judges at the Royal Society in London that it was human, does that mean it can think?

First, it's worth pointing out that Turing's paper didn't claim that if a machine can convince one-third of humans that it is a human being, it is capable of human thought. That's because Turing didn't find the notion of "thinking" all that useful for machines. The question of whether machines can think, he wrote, is "too meaningless to deserve discussion."

Why is that? The Turing Test, explains Mark Goldfeder, a senior fellow at the Center for the Study of Law and Religion at Emory University, suffers from shortcomings elucidated by a thought experiment known as the Chinese Room Argument. In the experiment, first published in 1980 by the American philosopher John Searle, Searle imagines himself locked in a room while slips of paper bearing Chinese characters are passed under the door. He doesn't speak Chinese, but he is armed with a program, a set of rules for manipulating the characters. By following those rules, he can arrange the characters coherently enough to convince the person on the other side of the door that he knows Chinese. As the Stanford Encyclopedia of Philosophy sums it up, "Since a computer just does what the human does—manipulate symbols on the basis of their syntax alone—no computer, merely by following a program, comes to genuinely understand Chinese."
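
Searle's point is easy to demonstrate in miniature. The toy responder below, a hypothetical sketch and not Goostman's actual code, produces plausible replies by matching input text against a fixed rulebook; it manipulates symbols by their surface form alone, with no model of meaning anywhere in the program.

```python
# A Chinese Room in miniature: replies come from matching input
# symbols against a fixed rulebook. Nothing here "understands" the
# messages; it only manipulates them by their form (syntax).

RULEBOOK = {
    "where are you from": "A big Ukrainian city called Odessa.",
    "how old are you": "I am 13 years old.",
}

def reply(message: str) -> str:
    """Look the message up by surface form alone; otherwise dodge."""
    key = message.lower().strip(" ?!.")
    return RULEBOOK.get(key, "I can't disclosure my thoughts!")

print(reply("Where are you from?"))             # matched by syntax, not meaning
print(reply("Have you ever been to Ukraine?"))  # no rule, so a canned dodge
```

A rulebook this small is trivially easy to catch out, which is roughly why the Wired exchange rang hollow: nothing in the table knows that Odessa is in Ukraine, so the bot happily contradicts itself.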

Even if Eugene Goostman passed the Turing Test—convincing human judges, on the basis of a brief text exchange, that the bot was human—it still doesn't prove that machines can think.

So what does it prove? In his paper, Turing concluded that the question "'Can machines think?' ... should be replaced by 'Are there imaginable digital computers which would do well in the imitation game?'" In other words, this measure of machine intellect is relative to its influence on the humans with which it interacts. A broader restatement of the question might be: How seriously should we take this robot?

According to Goldfeder, Goostman's ability to fool human beings indicates that it may be time to begin considering machines—chat-bots, robots, etc.—as entities endowed with personhood. "My main argument," he says, "is that people conflate personhood with humanity. They're different."

Goldfeder doesn't claim that a computer should be endowed with the moral rights of a human being, but he does believe that we need to take our interactions with machines more seriously, especially in a legal sense. Pointing to increasingly autonomous drones, sophisticated chat-bots, robotic prison guards and self-driving cars, Goldfeder suggests that machines are occupying a new role in our lives.

Aware that not everyone will subscribe to his suggestion that robots should have the legal status of a person, Goldfeder offers some irrefutable red-white-and-blue logic: As machines increase in complexity, evolving through interactions with humans (as in the case of a chat-bot), "you're gonna want to have somebody to sue."
