The best way to teach AI to communicate politely? Introduce it to Reddit, of course

Remember the horror movie The Fly, in which a brilliant scientist comes up with a groundbreaking technology, only for his good intentions to head south when he accidentally gets genetically spliced with a housefly, turning him into a hideous monster?

That’s kind of the vibe we get hearing about a project that aims to teach an AI to hold a polite conversation by feeding it message threads from Reddit.

Nonetheless, that’s the working theory held by researchers at OpenAI, the nonprofit artificial intelligence company backed by Silicon Valley heavy-hitters like Elon Musk, whose stated mission is “to build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible.”

All kidding aside, the impressive-sounding project focuses on creating a deep learning neural network able to build up a “probabilistic understanding” of conversation, and maybe even construct enough of a language model to be able to communicate on its own. To help with this, OpenAI is employing a DGX-1 supercomputer developed by Nvidia.

According to a report, computations that would take around 250 hours on a regular computer take just 10 hours on the DGX-1. Although only limited details have been released about OpenAI’s Reddit project, this could mean the ability to crunch masses of data as part of the training process, and potentially build an enormous neural network (essentially a software approximation of the way the human brain works) with an enviable number of parameters.

As we can see from the number of companies working to build conversational agents right now, OpenAI isn’t alone in its mission to create a smart AI capable of holding a conversation. Still, the idea of training it with Reddit message threads is something we haven’t come across before, although it sounds very intriguing.

Let’s hope it’s trained more on respectable subreddits like /r/Science and /r/TodayILearned than /r/WatchPeopleDie and /r/SpaceDicks, though. Because that could make all the difference.

Luke Dormehl