
Author Ray Kurzweil develops new chatbot with Google based on one of his characters

Voice assistants are here to stay. Over the years, they’ve become ever more efficient at divining our intentions and interpreting our internet data profiles to better understand what we want. It also helps that they (usually) know the difference between similar-sounding words like dad and bad. Ray Kurzweil, an author whose subjects include health, artificial intelligence, and transhumanism, is working with Google (which has been among the most active players in this field) to create chatbots. These bots are said to be more advanced than the norm, enabling more “interesting conversations.”

It remains to be seen how these bots will speak and what function they will fulfill. But Kurzweil specified that at least one of these chatbots would be based on Danielle, a character from one of his books. Supposedly, these bots “come to life” by being fed humongous volumes of text, such as blogs. The result isn’t anywhere close to human intelligence yet, and while we can already have interesting conversations with AI systems, Kurzweil says we will have to wait until 2029 before we can have meaningful ones. By then, he predicts, the bots should be able to pass the Turing test.

Perhaps even more tantalizing is the fact that Danielle was created by feeding the bot all of her dialogue from Kurzweil’s book. This particular AI should be released later this year, with more to follow. Kurzweil also says the same approach could be applied to something like a person’s blog. While there’s no word on a release date for that, it suggests we will eventually be able to profile ourselves as chatbots. But though our typographical clone war may be coming, it’s probably in a galaxy far, far away.
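Neither Kurzweil nor Google has described how these bots actually work, but the basic idea of training a text generator on a character’s collected dialogue can be illustrated with a toy Markov-chain model. This is only a sketch of the concept, not Google’s method; the sample dialogue below is invented for illustration.

```python
import random
from collections import defaultdict

def build_model(corpus, order=2):
    """Map each run of `order` words to the words observed to follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=20, seed=0):
    """Walk the chain from a random starting key, mimicking the corpus's style."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical stand-in for a character's collected dialogue.
dialogue = (
    "the future belongs to those who build it "
    "and the future rewards those who imagine it"
)
model = build_model(dialogue)
print(generate(model, length=10))
```

A real chatbot would use far more sophisticated models, but the principle is the same: the more text you feed in, the more the output echoes its source’s voice.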

Let’s imagine the possibilities here for a moment. You could start a conversation with your teenage self to come to terms with what a brat you were, or you could profile a friend and throw the bot into the open, hoping nobody misuses it. There are real risks involved in being able to profile ourselves via the texts we write, and it seems fitting that the tech giants sat down to hold an AI ethics meeting at the beginning of this year.
