
AI researchers warn of ‘human extinction’ threat without further oversight

More than a dozen current and former employees of OpenAI, Google DeepMind, and Anthropic posted an open letter on Tuesday calling attention to the “serious risks” posed by continuing to rapidly develop the technology without an effective oversight framework in place.

The group of researchers argues that the technology could be misused to exacerbate existing inequalities, manipulate information, and spread disinformation, and could even lead to “the loss of control of autonomous AI systems potentially resulting in human extinction.”

The signatories believe that these risks can be “adequately mitigated” through the combined efforts of the scientific community, legislators, and the public, but worry that “AI companies have strong financial incentives to avoid effective oversight” and cannot be counted upon to impartially steward the technology’s development.

Since the release of ChatGPT in November 2022, generative AI technology has taken the computing world by storm, with hyperscalers like Google Cloud, Amazon Web Services, Oracle, and Microsoft Azure leading what is expected to be a trillion-dollar industry by 2032. A recent McKinsey study found that, as of March 2024, nearly 75% of organizations surveyed had adopted AI in at least one capacity. Meanwhile, in its annual Work Trend Index survey, Microsoft found that 75% of office workers already use AI at work.

However, as Daniel Kokotajlo, a former OpenAI employee, told The Washington Post, “They and others have bought into the ‘move fast and break things’ approach, and that is the opposite of what is needed for technology this powerful and this poorly understood.” AI companies including OpenAI and Stability AI, the maker of Stable Diffusion, have repeatedly run afoul of U.S. copyright laws, for example, while publicly available chatbots are routinely goaded into repeating hate speech and conspiracy theories, as well as spreading misinformation.

The objecting AI employees argue that these companies possess “substantial non-public information” about their products’ capabilities and limitations, including the models’ potential to cause harm and how effective their protective guardrails actually are. They point out that only some of this information is available to government agencies, through “weak obligations” to share it, and that none of it is available to the general public.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the group stated, arguing that the industry’s broad use of confidentiality agreements and weak implementation of existing whistleblower protections are hampering those efforts.

The group called on AI companies to stop entering into and enforcing non-disparagement agreements, to establish an anonymous process for employees to raise concerns with the company’s board of directors and with government regulators, and not to retaliate against public whistleblowers should those internal processes prove insufficient.
