Author Ray Kurzweil develops new chatbot with Google based on one of his characters

Voice assistants are here to stay. Over the years, they’ve become ever more adept at discerning our intentions and interpreting our internet data profiles to better understand what we want. It also helps that they (usually) know the difference between similar-sounding words like dad and bad. Ray Kurzweil, an author whose subjects include health, artificial intelligence, and transhumanism, is working with Google (one of the most active companies in this field) to create chatbots. These bots are said to be more advanced than the norm, enabling more “interesting conversations.”

It remains to be seen how these bots will speak and what function they will fulfill, but Kurzweil specified that at least one of them would be based on Danielle, a character from one of his books. Supposedly, these bots “come to life” by being fed enormous volumes of text, such as blogs. The result isn’t anywhere close to human intelligence yet, and while we can already have interesting conversations with AI systems, Kurzweil says we will have to wait until 2029 before we can have meaningful ones. By then, he believes, the bots should be able to pass the Turing test.

Perhaps even more tantalizing is the fact that Danielle was created by feeding the bot all of her dialogue from Kurzweil’s book. This particular AI should be released later this year, with more to follow. Kurzweil also says the same approach could be applied to something like a person’s blog. There’s no word on a release date for that, but it suggests we will eventually be able to profile ourselves as chatbots. Though our typographical clone war may be coming, it’s probably still in a galaxy far, far away.

Let’s imagine the possibilities here for a moment. You could start a conversation with your teenage self to come to terms with what a brat you were, or you could profile a friend and throw the bot into the open, hoping nobody misuses it. There are a lot of risks involved in being able to profile ourselves via the texts we write, and it seems fitting that the tech giants sat down to hold an AI ethics meeting at the beginning of this year.

Dan Isacsson