
Tech leaders call for pause of GPT-4.5, GPT-5 development due to ‘large-scale risks’

Generative AI has been moving at an unbelievable speed in recent months, with the launch of various tools and bots such as OpenAI’s ChatGPT, Google Bard, and more. Yet this rapid development is causing serious concern among seasoned veterans in the AI field — so much so that over 1,000 of them have signed an open letter calling on AI developers to slam on the brakes.

The letter was published on the website of the Future of Life Institute, an organization whose stated mission is “steering transformative technology towards benefitting life and away from extreme large-scale risks.” Among the signatories are several prominent academics and leaders in tech, including Apple co-founder Steve Wozniak, Twitter CEO Elon Musk, and politician Andrew Yang.

The letter calls on all companies working on AI models more powerful than the recently released GPT-4 to halt work immediately for at least six months. This moratorium should be “public and verifiable” and would allow time to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

The letter says this is necessary because “AI systems with human-competitive intelligence can pose profound risks to society and humanity.” Those risks include the spread of propaganda, the destruction of jobs, the potential replacement and obsolescence of human life, and the “loss of control of our civilization.” The authors add that the decision over whether to press ahead into this future should not be left to “unelected tech leaders.”

AI ‘for the clear benefit of all’


The letter comes just after claims were made that GPT-5, the next version of the tech powering ChatGPT, could achieve artificial general intelligence. If correct, that means it would be able to understand and learn anything a human can comprehend. That could make it incredibly powerful in ways that haven’t yet been fully explored.

What’s more, the letter contends that responsible planning and management surrounding the development of AI systems is not happening, “even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

Instead, the letter asserts that new governance systems must be created that will regulate AI development, help people distinguish AI-created and human-created content, hold AI labs like OpenAI responsible for any harm they cause, enable society to cope with AI disruption (especially to democracy), and more.

The authors end on a positive note, claiming that “humanity can enjoy a flourishing future with AI … in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.” Hitting pause on AI systems more powerful than GPT-4 would allow this to happen, they state.

Will the letter have its intended effect? That’s hard to say. There are clearly incentives for OpenAI to continue working on advanced models, both financial and reputational. But with so many potential risks — and with very little understanding of them — the letter’s authors clearly feel those incentives are not worth the gamble of pressing ahead.
