

AI researchers warn of ‘human extinction’ threat without further oversight

More than a dozen current and former employees of OpenAI, Google DeepMind, and Anthropic posted an open letter on Tuesday calling attention to the “serious risks” posed by continuing to rapidly develop the technology without an effective oversight framework in place.

The researchers argue that the technology could be misused to exacerbate existing inequalities, manipulate information, and spread disinformation, and they warn of “the loss of control of autonomous AI systems potentially resulting in human extinction.”


The signatories believe that these risks can be “adequately mitigated” through the combined efforts of the scientific community, legislators, and the public, but worry that “AI companies have strong financial incentives to avoid effective oversight” and cannot be counted upon to impartially steward the technology’s development.

Since the release of ChatGPT in November 2022, generative AI technology has taken the computing world by storm, with hyperscalers like Google Cloud, Amazon Web Services, Oracle, and Microsoft Azure leading what is expected to be a trillion-dollar industry by 2032. A recent study by McKinsey found that, as of March 2024, nearly 75% of organizations surveyed had adopted AI in at least one capacity. Meanwhile, in its annual Work Trend Index survey, Microsoft found that 75% of office workers already use AI at work.

However, as Daniel Kokotajlo, a former employee at OpenAI, told The Washington Post, “They and others have bought into the ‘move fast and break things’ approach, and that is the opposite of what is needed for technology this powerful and this poorly understood.” AI firms including OpenAI and Stability AI, maker of Stable Diffusion, have repeatedly run afoul of U.S. copyright laws, for example, while publicly available chatbots are routinely goaded into repeating hate speech and conspiracy theories and into spreading misinformation.

The objecting AI employees argue that these companies possess “substantial non-public information” about their products’ capabilities and limitations, including the models’ potential risk of causing harm and how effective their protective guardrails actually are. They point out that only some of this information is available to government agencies through “weak obligations to share,” and that none of it is available to the general public.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the group stated, arguing that the industry’s broad use of confidentiality agreements and weak implementation of existing whistleblower protections hamper employees’ ability to raise those concerns.

The group called on AI companies to stop entering into and enforcing non-disparagement agreements, to establish an anonymous process for employees to raise their concerns with the company’s board of directors and government regulators, and not to retaliate against public whistleblowers should those internal processes prove insufficient.

Andrew Tarantola
Former Computing Writer
Turns out, it’s not that hard to do what OpenAI does for less

Even as OpenAI continues clinging to its assertion that the only path to AGI lies through massive financial and energy expenditures, independent researchers are leveraging open-source technologies to match the performance of its most powerful models -- and do so at a fraction of the price.

Last Friday, a joint team from Stanford University and the University of Washington announced that it had trained a math- and coding-focused large language model that performs as well as OpenAI's o1 and DeepSeek's R1 reasoning models. It cost just $50 in cloud compute credits to build. The team reportedly started with an off-the-shelf base model, then distilled Google's Gemini 2.0 Flash Thinking Experimental model into it. Distillation involves pulling the knowledge needed for a specific task out of a larger AI model and transferring it to a smaller one, typically by training the smaller model to imitate the larger model's outputs.
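To make the idea concrete, here is a minimal, illustrative sketch of classic soft-label distillation in PyTorch. It is not the Stanford and University of Washington team's actual pipeline; the toy models, data, and hyperparameters below are invented for illustration, and real LLM distillation usually fine-tunes the smaller model on text the larger model generates rather than on toy logits like these.

# Minimal sketch of knowledge distillation (soft-label variant), for illustration only.
# The "teacher" and "student" here are tiny stand-in networks, not real language models.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A larger, frozen teacher and a smaller, trainable student (hypothetical sizes).
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so the student gets richer signal

for step in range(100):
    x = torch.randn(64, 32)  # placeholder inputs; real distillation uses task data
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # Train the student to match the teacher's softened output distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The key design point is that the student learns from the teacher's full output distribution rather than from hard labels alone, which is what lets a much smaller model recover a surprising share of the larger model's task performance.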

OpenAI’s rebrand is meant to make the company appear ‘more human’

OpenAI has unveiled a rebrand that brings changes to its logo, typeface, and color palette. It is the company’s first rebrand since it rose to prominence in 2022 with the popularity of its ChatGPT chatbot.

OpenAI’s head of design, Veit Moeller, and design director Shannon Jager spoke with Wallpaper about the rebrand, noting that the company aimed to create a “more organic and more human” visual identity. This included collaborating with outside partners to develop a new typeface, OpenAI Sans, that is unique to the brand. The result is a look that “blends geometric precision and functionality with a rounded, approachable character,” OpenAI said.

OpenAI CEO Sam Altman admits the heyday of ChatGPT is over

OpenAI CEO Sam Altman has conceded that the company has lost some of its edge in the AI space following the arrival of Chinese firm DeepSeek and its R1 reasoning model. However, he says the company will continue to develop within the industry.

The company head admitted OpenAI has been "on the wrong side of history" in terms of open-source development for its AI models. Altman and several other OpenAI executives discussed the state of the company and its future plans during an Ask Me Anything session on Reddit on Friday, where the team got candid with curious enthusiasts about a range of topics. 
