
Turns out, it’s not that hard to do what OpenAI does for less

OpenAI's new typeface, OpenAI Sans. Image: OpenAI

Even as OpenAI continues clinging to its assertion that the only path to AGI lies through massive financial and energy expenditures, independent researchers are leveraging open-source technologies to match the performance of its most powerful models — and do so at a fraction of the price.

Last Friday, a joint team from Stanford University and the University of Washington announced that it had trained a math- and coding-focused large language model that performs as well as OpenAI’s o1 and DeepSeek’s R1 reasoning models. It cost just $50 in cloud compute credits to build. The team reportedly took an off-the-shelf base model and distilled Google’s Gemini 2.0 Flash Thinking Experimental model into it. Distillation pulls the knowledge needed for a specific task out of a larger AI model and transfers it to a smaller one, typically by fine-tuning the smaller model on the larger model’s outputs.
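
For readers curious what that looks like in practice, here is a minimal, hedged sketch of output-based distillation using Hugging Face’s transformers library: a larger “teacher” model writes step-by-step solutions, and a smaller “student” model is fine-tuned to imitate them. The model names, prompts, and hyperparameters below are illustrative placeholders, not the Stanford/UW team’s actual recipe (which used Gemini’s reasoning traces as the teacher signal).

```python
# Illustrative sketch of output-based distillation. Model names and
# hyperparameters are placeholders, not the actual setup used by the
# Stanford/UW team.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

TEACHER = "Qwen/Qwen2.5-7B-Instruct"    # stand-in for the larger "teacher" model
STUDENT = "Qwen/Qwen2.5-0.5B-Instruct"  # stand-in for the off-the-shelf base model

teacher_tok = AutoTokenizer.from_pretrained(TEACHER)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER)
student_tok = AutoTokenizer.from_pretrained(STUDENT)
student = AutoModelForCausalLM.from_pretrained(STUDENT)
student.train()
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

problems = [
    "Show that the product of two odd integers is odd.",
    "Write a Python function that reverses a linked list.",
]

for problem in problems:
    # 1. Ask the teacher for a step-by-step solution (the "trace").
    prompt = f"Solve step by step:\n{problem}\n"
    teacher_inputs = teacher_tok(prompt, return_tensors="pt")
    with torch.no_grad():
        trace_ids = teacher.generate(**teacher_inputs, max_new_tokens=256)
    trace = teacher_tok.decode(trace_ids[0], skip_special_tokens=True)

    # 2. Fine-tune the student to reproduce the teacher's full solution.
    batch = student_tok(trace, return_tensors="pt", truncation=True, max_length=1024)
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice the teacher’s outputs are usually generated once, filtered for quality, and then used in a conventional fine-tuning run rather than interleaved like this, but the core idea, training the small model on the large model’s outputs, is the same.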

What’s more, on Tuesday, researchers from Hugging Face released Open Deep Research, a competitor to OpenAI’s Deep Research and Google Gemini’s identically named Deep Research tools, which they developed in just 24 hours. “While powerful LLMs are now freely available in open-source, OpenAI didn’t disclose much about the agentic framework underlying Deep Research,” Hugging Face wrote in its announcement post. “So we decided to embark on a 24-hour mission to reproduce their results and open-source the needed framework along the way!” Training it reportedly costs an estimated $20 in cloud compute credits and takes less than 30 minutes.
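
Open Deep Research is built on Hugging Face’s open-source smolagents library, in which an LLM-driven agent writes and runs code that calls tools (web search, page browsing, file reading) in a loop until it can answer the question. The snippet below is a rough sketch of that general pattern using smolagents’ basic building blocks; it is not the full Open Deep Research agent, and the class names reflect early-2025 releases of the library, so they may differ in newer versions.

```python
# A rough sketch of a tool-using agent loop with smolagents (Open Deep
# Research layers web-browsing and file-inspection tools on top of this).
# Class names reflect early-2025 smolagents releases and may have changed.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# The agent writes and executes Python snippets that call its tools,
# observes the results, and iterates until it can return an answer.
agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # web search as the single example tool
    model=HfApiModel(),              # an open LLM served via the HF Inference API
)

answer = agent.run(
    "Summarize the most recent benchmark results for open-source reasoning models."
)
print(answer)
```

Calling agent.run() kicks off the think-act-observe loop: the agent drafts Python that calls the search tool, inspects the results, and iterates before returning a final answer.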

Hugging Face’s model subsequently notched 55% accuracy on the General AI Assistants (GAIA) benchmark, which is used to test the capabilities of agentic AI systems. By comparison, OpenAI’s Deep Research scored between 67% and 73% accuracy, depending on the response methodology. Granted, the 24-hour model doesn’t perform quite as well as OpenAI’s offering, but it also didn’t take billions of dollars and the energy generation capacity of a mid-sized European nation to train.

These efforts follow news from January that a team out of the University of California, Berkeley’s Sky Computing Lab managed to train its Sky-T1 reasoning model for around $450 in cloud compute credits. The team’s Sky-T1-32B-Preview model proved the equal of OpenAI’s early o1-preview release. As more of these open-source competitors to OpenAI’s industry dominance emerge, their mere existence calls into question whether the company’s plan to spend half a trillion dollars on AI data centers and energy production facilities is really the answer.

Andrew Tarantola
Former Computing Writer
Your politeness toward ChatGPT is increasing OpenAI’s energy costs 
ChatGPT's Advanced Voice Mode on a smartphone.

Everyone’s heard the expression, “Politeness costs nothing,” but with the advent of AI chatbots, it may have to be revised.

Just recently, someone on X wondered how much OpenAI spends on electricity at its data centers to process polite terms like “please” and “thank you” when people engage with its ChatGPT chatbot.

Read more
Meta is training AI on your data. Users say opting out doesn’t work.
Meta AI WhatsApp widget.

Imagine a tech giant telling you that it wants your Instagram and Facebook posts to train its AI models, without offering anything in return. The company says you can opt out. But when you use the official tools to back out and prevent the AI from gobbling up your social content, they simply don’t work.

That’s what Facebook and Instagram users are now reporting. Nate Hake, publisher and founding chief of Travel Lemming, shared that he received an email from Meta about using his social media content for AI training. However, the link to the opt-out form that Meta provided doesn’t work.

Read more
Apple is hoping your emails will fix its misfiring AI
Categories in Apple Mail app.

Apple’s AI efforts haven’t made the same kind of impact as Google’s Gemini, Microsoft Copilot, or OpenAI’s ChatGPT. The company’s AI stack, dubbed Apple Intelligence, hasn’t moved the functional needle for iPhone and Mac users and has even triggered an internal management crisis at the company.

It seems user data could rescue the sinking ship. Earlier today, the company published a machine learning research paper that details a new approach to training its on-device AI using data stored on your iPhone, starting with emails. Those emails will be used to improve features such as email summarization and Writing Tools.

Read more