Computer AI passes the Turing test for the first time

If you needed more evidence of the rise of the machines, there’s news from London this weekend, where a computer program has passed the Turing test for the first time. Under the rules of the test, named after the brilliant British mathematician Alan Turing, a program must convince more than 30 percent of the human judges interrogating it that it is human.

The test has long been held up as a barometer of progress in the field of artificial intelligence, so the success of a contestant at the Royal Society in London is significant. While similar claims have been made in the past, one of the organizers said that “this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted.”

Named Eugene Goostman and developed in Russia, the winning program impersonated a 13-year-old Ukrainian boy, doing enough in a series of instant-message conversations to convince 33 percent of the judges that it was indeed human. That clears the threshold Turing set down in 1950: “[In the future] it will be possible to program computers… to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.” In other words, a program passes if judges misidentify it as human more than 30 percent of the time.
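To make the arithmetic explicit, the pass condition boils down to a simple threshold check. The Python sketch below is purely illustrative: the judge counts are hypothetical, and nothing like this code appears in the contest software itself.

def passes_turing_criterion(judges_fooled, total_judges, threshold=0.30):
    """Return True if the share of judges fooled exceeds Turing's threshold."""
    # Turing's 1950 criterion: the average interrogator has no more than a
    # 70 percent chance of a correct identification, which is the same as
    # the machine fooling judges more than 30 percent of the time.
    return judges_fooled / total_judges > threshold

# Hypothetical counts: 10 of 30 judges fooled is roughly Eugene's 33 percent.
print(passes_turing_criterion(10, 30))  # True, since 0.33... > 0.30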

In development since 2001, Eugene is hosted online and can be questioned by anyone over the Web. “Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything,” said Vladimir Veselov, one of the programmers behind Eugene. “We spent a lot of time developing a character with a believable personality.”

Whether or not the AI responds with the right answer isn’t under examination in the Turing test, which focuses only on the ‘humanness’ of the responses, so Eugene is unlikely to be taking over the world any time soon. However, some observers added a note of caution: “In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human,” Coventry University’s Kevin Warwick told the Telegraph. “Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime.”

“It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true… when in fact it is not,” Warwick added. Coincidentally, the landmark breakthrough came on the 60th anniversary of Alan Turing’s death.

David Nield
Dave is a freelance journalist from Manchester in the north-west of England.