If you’ve ever been bored on the Internet (which, of course, you have), then you may have stumbled into a chat with Cleverbot, one of the various artificial intelligence entities that exist online. Since all the responses from Cleverbot are generated by a computer program, the conversations often take unexpected turns. But what would happen if two of these so-called chatbots talked amongst themselves?
This brilliant question was put to the test at Cornell University’s Creative Machines Lab, which focuses on the study of “autonomous systems that can design and create other machines — automatically,” according to the description on the lab’s website. To accomplish this, researchers at Creative Machines put two chatbots together, and the result is nothing short of amazing.
The project serves as the Creative Machines Lab’s submission to the 2011 Loebner Prize Competition in Artificial Intelligence, which takes place this year on October 19 at the University of Exeter in the UK. At stake is a $100,000 grand prize, awarded to the team whose AI looks and communicates indistinguishably from a real human. The prize the Creative Machines Lab is competing for is the Silver Medal, which comes with $25,000 in winnings and is awarded to the team whose AI can hold the most human-like conversation.
Last year’s Loebner Prize went to AI programmer Bruce Wilcox, who created a chatbot named Suzette.
So, how human-like is the Cornell team’s duo? That depends on what your definition of “human-like” is. If talking over each other, misunderstanding, being mildly offensive and talking about God are all on your list of qualifications, then these chatbots may be winners. Of course, the actual product is slightly less human-like, and much more two-chatbots-talking-to-each-other-like. But the AIs still manage to keep the conversation interesting, which is more than we can say for some people.