If you needed more evidence of the rise of the machines, there’s news from London this weekend: a computer program has passed the Turing test for the first time. Under the rules of the test, named after the brilliant British mathematician Alan Turing, a program must convince more than 30 percent of the people interacting with it that it is human.
The test has long been held as a barometer of progress in the field of artificial intelligence, and the fact that a contestant at the Royal Society of London has succeeded is significant. While similar claims have been made in the past, one of the organizers said that “this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted.”
Named Eugene Goostman and developed in Russia, the winning program impersonated a 13-year-old Ukrainian boy, doing enough via an instant-message routine to convince 33 percent of the judges that it was indeed human. This meets the criterion set down by Turing in 1950: “[In the future] it will be possible to program computers… to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”
In development since 2001, Eugene is hosted online and can be questioned by anyone over the Web. “Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything,” said Vladimir Veselov, one of the programmers behind Eugene. “We spent a lot of time developing a character with a believable personality.”
Whether or not the AI responds with the right answer isn’t under examination in the Turing test — it focuses only on the ‘humanness’ of the responses — so Eugene is unlikely to be taking over the world any time soon. However, some observers added a note of caution: “In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human,” Coventry University’s Kevin Warwick told the Telegraph. “Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime.”
“It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true… when in fact it is not,” Warwick added. Coincidentally, the landmark breakthrough came on the 60th anniversary of Alan Turing’s death.