Researchers argue AI can fool the Turing test without saying a thing

Alleged criminals might not be the only ones to benefit from pleading the Fifth. By falling silent during the Turing test, artificial intelligence (AI) systems can fool human judges into believing they’re human, according to a study by machine intelligence researchers from Coventry University.

Alan Turing, considered the father of theoretical computer science and AI, devised the Turing test in an attempt to outline what it means for a thing to think. In the test, a human judge or interrogator has a conversation with an unseen entity, which might be a human or a machine. The test posits that the machine can be considered to be “thinking” or “intelligent” if the interrogator is unable to tell whether or not the machine is a human.

Also known as the imitation game, the test has become a standard, though often a misleading one, for determining whether an AI has qualities like intellect, active thought, and even consciousness.

In the study, “Taking the Fifth Amendment in Turing’s Imitation Game,” published in the Journal of Experimental and Theoretical Artificial Intelligence by Dr. Huma Shah and Dr. Kevin Warwick of Coventry University, the researchers analyzed six transcripts from prior Turing tests and determined that, when the machines fell silent, the judges were left undecided about their interlocutor’s humanness. The silence didn’t even need to be intentional; in fact, it tended to result from technical difficulties.
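The dynamic is easy to picture in code. Below is a minimal, purely illustrative Python sketch, not anything taken from the study itself: the function names, response probability, and verdict rule are all assumptions chosen for demonstration. It shows how total silence leaves a judge with no signal to classify, defaulting to an “unsure” verdict.

```python
# Toy sketch of the imitation-game dynamic described above.
# All names, probabilities, and verdict rules here are illustrative
# assumptions, not details from the Shah/Warwick study.
import random

def hidden_machine(question):
    """A hidden interlocutor whose replies sometimes fail to arrive,
    mimicking the technical glitches the study describes."""
    if random.random() < 0.5:
        return None  # silence: the response never reaches the judge
    return "That's an interesting question."

def judge_verdict(questions):
    """Classify the hidden interlocutor after a short exchange."""
    replies = [hidden_machine(q) for q in questions]
    if all(r is None for r in replies):
        # Total silence gives the judge nothing to evaluate.
        return "unsure"
    # A real judge would weigh the content of the replies; this toy
    # version simply guesses once it has any material at all.
    return random.choice(["human", "machine"])

print(judge_verdict(["Hello, how are you?", "Where did you grow up?"]))
```

Run it a few times and the silent conversations always come back “unsure,” which is exactly the loophole the researchers flagged: a judge who receives nothing cannot rule out a human on the other end.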

“The idea [for the study] came from technical issues with a couple of the computer programs in Turing test experiments,” Shah tells Digital Trends. “The technical issues entailed the failure of the computer programs to relay messages or responses to the judge’s questions. The judges were unaware of the situation and hence in some cases they classified their hidden interlocutor as ‘unsure.’”

The silent machines may have baffled their judges, but their silence exposed a flaw in the exam rather than confirming its utility. Warwick says this raises serious questions about the Turing test’s validity and its ability to assess thinking systems. “We need a much more rigorous and all-encompassing test to do so, even if we are considering a machine’s way of thinking to be different to that of a human,” he tells Digital Trends.

Shah, meanwhile, notes that the test was designed to give a framework within which to “build elaborate machines to respond in a satisfactory and sustained manner,” not to build machines that simply trick judges. In short, the systems are meant to imitate human conversation, and no human who takes the test seriously would fall silent. Right?

Well, they might, thinks Warwick. “One thing that I have learnt from such tests is that hidden human entities will almost surely do unexpected and illogical things,” he says. “In this case a human could easily get upset or annoyed by something a judge has said and decide not to reply — they are human after all.”

An alternative view is that the Turing test has already been undermined by the current state of AI. Shah says she agrees with Dave Coplin, Microsoft’s Chief Envisioning Officer in the UK, who thinks the “machine vs. human” challenges are no longer relevant. At the AI Summit in London in May, Coplin pointed out that, at the rate AI is advancing, developing an intelligent machine doesn’t seem all that far-fetched, given enough resources.

“The role of AI is to augment human performance with intelligent agents,” Shah says. “For example, a human educator using an AI to score student assignments and exam questions leaving the teacher time to innovate learning, inspiring students, encouraging more into STEM, including females, for a better life or world of cooperation.”

From this perspective, it’s absurd to develop an AI whose sole goal is to fool a human into thinking it’s human — especially if the simplest way to do so entails making it mute.

Dyllan Furness
Dyllan Furness is a freelance writer from Florida. He covers strange science and emerging tech for Digital Trends, focusing…