OpenAI showing a ‘very dangerous mentality’ regarding safety, expert warns

An AI expert has accused OpenAI of rewriting its history and being overly dismissive of safety concerns.

Former OpenAI policy researcher Miles Brundage criticized the company’s recent safety and alignment document published this week. The document describes OpenAI as striving for artificial general intelligence (AGI) in many small steps, rather than making “one giant leap,” saying that the process of iterative deployment will allow it to catch safety issues and examine the potential for misuse of AI at each stage.

Among the many criticisms of AI technology like ChatGPT, experts are concerned that chatbots will give inaccurate information regarding health and safety (like the infamous issue with Google’s AI search feature that instructed people to eat rocks) and that they could be used for political manipulation, misinformation, and scams. OpenAI in particular has attracted criticism for a lack of transparency in how it develops its AI models, which can be trained on sensitive personal data.

The release of the OpenAI document this week appears to be a response to these concerns. The document implies that the development of the previous GPT-2 model was “discontinuous” and that the model was not initially released due to “concerns about malicious applications,” but that the company will now move toward a principle of iterative development instead. Brundage, however, contends that the document alters the narrative and is not an accurate depiction of the history of AI development at OpenAI.

“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent + foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage wrote on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”

Brundage also criticized the company’s apparent approach to risk based on this document, writing: “It feels as if there is a burden of proof being set up in this section where concerns are alarmist + you need overwhelming evidence of imminent dangers to act on them – otherwise, just keep shipping. That is a very dangerous mentality for advanced AI systems.”

This comes at a time when OpenAI is under increasing scrutiny, facing accusations that it prioritizes “shiny products” over safety.

Georgina Torbet
Georgina has been the space writer at Digital Trends for six years, covering human space exploration, planetary…
DeepSeek invites users behind the curtain of its open source AI code
The Chinese startup DeepSeek plans to become even more transparent about the technology behind its open-source AI models, such as its R1 reasoning model.

The company detailed in a post on X on Friday that it will make several code repositories available to the public, starting next week. This will give developers and researchers a deeper understanding of the key parts of DeepSeek’s code. It is an especially bold move for a tech company. However, bold moves are already par for the course for DeepSeek, which entered the AI space as an industry disrupter. It has especially stood out because its models have performed as well as, if not better than, many of the top AI brands in the industry, such as OpenAI and Meta, which use proprietary technologies.

OpenAI’s Operator agent is coming to eight more countries
Following its U.S. debut in January, OpenAI's Operator AI agent will soon be expanding to eight new nations, the company announced on Friday.

"Operator is now rolling out to Pro users in Australia, Brazil, Canada, India, Japan, Singapore, South Korea, the UK, and most places ChatGPT is available," the OpenAI team wrote in a post to X. The company is "still working on making Operator available in the EU, Switzerland, Norway, Liechtenstein & Iceland," but has not clarified release timing for those additional countries. As with the American release, users in the expanded list of nations will still have to pay for OpenAI's $200-per-month Pro tier subscription in order to access the AI agent.

With 400 million users, OpenAI maintains lead in competitive AI landscape
Competition in the AI industry remains tough, and OpenAI has proven that it is not taking any coming challenges lightly. The generative AI brand announced Thursday that it serves 400 million weekly active users as of February, a 33% increase in less than three months.

OpenAI chief operating officer Brad Lightcap confirmed the latest user statistics to CNBC, indicating that the figures had not been previously reported. The numbers have risen quickly from the previously confirmed 300 million weekly users in December.
