ChatGPT may soon moderate illegal content on sites like Facebook

GPT-4 — the large language model (LLM) that powers ChatGPT Plus — may soon take on a new role as an online moderator, policing forums and social networks for nefarious content that shouldn’t see the light of day. That’s according to a new blog post from ChatGPT developer OpenAI, which says this could offer “a more positive vision of the future of digital platforms.”

By enlisting artificial intelligence (AI) instead of human moderators, OpenAI says, GPT-4 can enable “much faster iteration on policy changes, reducing the cycle from months to hours.” What’s more, “GPT-4 is also able to interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling,” OpenAI claims.


For example, the blog post explains that moderation teams could assign labels to content to explain whether it falls within or outside a given platform’s rules. GPT-4 could then take the same data set and assign its own labels, without knowing the answers beforehand.

The moderators could then compare the two sets of labels and use any discrepancies to reduce confusion and add clarification to their rules. In other words, GPT-4 could act as an everyday user and gauge whether the rules make sense.
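The comparison step described above can be sketched in a few lines of Python. This is only an illustration of the workflow, not code from OpenAI's blog post: the function and label names are hypothetical, and the model's labels are hard-coded here in place of an actual GPT-4 call.

```python
# Sketch of the label-comparison loop (hypothetical names; the model's
# output is stubbed in rather than fetched from an API).

def find_discrepancies(human_labels, model_labels):
    """Return items where the human and model labels disagree,
    mapped to a (human, model) pair for review."""
    return {
        item: (human, model_labels.get(item))
        for item, human in human_labels.items()
        if model_labels.get(item) != human
    }

# Moderators label three posts against the platform's policy...
human = {"post_1": "allowed", "post_2": "violates", "post_3": "allowed"}
# ...and a model labels the same posts without seeing the human answers.
model = {"post_1": "allowed", "post_2": "allowed", "post_3": "allowed"}

disagreements = find_discrepancies(human, model)
# post_2 is flagged -> a candidate spot where the policy wording
# may need clarification.
```

Each disagreement points moderators at a rule the model read differently than they intended, which is where the clarification effort would go.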

The human toll

OpenAI's GPT-4 large language model attempts to moderate a piece of content. The result is compared to a human's analysis of the content.
OpenAI

Right now, content moderation on various websites is performed by humans, which exposes them to potentially illegal, violent, or otherwise harmful content on a regular basis. We’ve repeatedly seen the awful toll that content moderation can take on people, with Facebook paying $52 million to moderators who suffered from PTSD due to the traumas of their job.

Reducing the burden on human moderators could help to improve their working conditions, and since AIs like GPT-4 are immune to the kind of mental stress that humans feel when handling troublesome content, they could be deployed without worrying about burnout and PTSD.

However, it does raise the question of whether using AI in this manner would result in job losses. Content moderation is not always a fun job, but it is a job nonetheless, and if GPT-4 takes over from humans in this area, there will likely be concern that former content moderators will simply be made redundant rather than reassigned to other roles.

OpenAI does not address this possibility in its blog post, and it is ultimately something for content platforms to decide. But that omission might not do much to allay fears that large companies will deploy AI simply as a cost-saving measure, with little concern for the aftermath.

Still, if AI can reduce or eliminate the mental devastation faced by the overworked and underappreciated teams who moderate content on the websites used by billions of people every day, there could be some good in all this. It remains to be seen whether that will be tempered by equally devastating redundancies.
