
OpenAI drops nonprofit status in large-scale reorganization


Reuters reports that OpenAI plans to scrap the nonprofit structure of its core business in an effort to make itself more attractive to investors. The move would remove the authority of its nonprofit board of directors and grant CEO Sam Altman equity in the company.

“We remain focused on building AI that benefits everyone, and we’re working with our board to ensure that we’re best positioned to succeed in our mission. The nonprofit is core to our mission and will continue to exist,” an OpenAI spokesperson told Reuters. The nonprofit portion of the business would not be done away with entirely; instead, it would continue to exist and own a minority stake in the overall company.


Sam Altman could receive as much as $150 billion in equity from the restructured company. That’s quite the reversal of fortunes for Altman, who, just last November, had been fired from OpenAI by its board of directors.

Since Altman’s firing and subsequent rehiring, OpenAI has seen the departure of numerous high-level employees. Researchers Jan Leike and Ilya Sutskever both left in May, citing what they called the company’s disregard of safety guidelines in favor of building “shiny products.” Earlier this week, Chief Technology Officer Mira Murati also announced her resignation and was quickly followed by Chief Research Officer Bob McGrew and senior research executive Barret Zoph, though Altman denies that their departures are related to the proposed restructuring plan.

The plan is reportedly still being vetted by the company’s lawyers and stakeholders. There is no word yet on when the restructuring might be completed.

OpenAI was founded in 2015 as a nonprofit research organization, then incorporated a for-profit subsidiary, OpenAI LP, in 2019 in order to secure funding from Microsoft. Since the release of ChatGPT in 2022, OpenAI’s valuation has grown from $14 billion in 2021 to $150 billion in the most recent round of funding.

Andrew Tarantola