OpenAI teases its ‘breakthrough’ next-generation o3 reasoning model

Sam Altman describing the o3 model's capabilities. (OpenAI)

For the finale of its 12 Days of OpenAI livestream event, CEO Sam Altman revealed the company's next foundation models, successors to the recently announced o1 family of reasoning AIs, dubbed o3 and o3-mini.

And no, you aren't going crazy: OpenAI skipped right over o2, apparently to avoid a trademark conflict with British telecom provider O2.


While the new o3 models are not being released to the public just yet and there’s no word on when they’ll be incorporated into ChatGPT, they are now available for testing by safety and security researchers.

o3, our latest reasoning model, is a breakthrough, with a step function improvement on our hardest benchmarks. we are starting safety testing & red teaming now. https://t.co/4XlK1iHxFK

— Greg Brockman (@gdb) December 20, 2024

The o3 family, like the o1 models before it, operates differently from traditional generative models in that it internally fact-checks its responses before presenting them to the user. While this technique slows the model's response time by anywhere from a few seconds to a few minutes, its answers to complex science, math, and coding queries tend to be more accurate and reliable than what you'd get from GPT-4. The model can also transparently explain the reasoning that led it to its result.

Users can also manually adjust how much time the model spends considering a problem by selecting between low, medium, and high compute, with the highest setting returning the most complete answers. That performance does not come cheap, mind you: high-compute processing will reportedly cost thousands of dollars per task, ARC-AGI co-creator François Chollet wrote in an X post on Friday.

Today OpenAI announced o3, its next-gen reasoning model. We've worked with OpenAI to test it on ARC-AGI, and we believe it represents a significant breakthrough in getting AI to adapt to novel tasks.

It scores 75.7% on the semi-private eval in low-compute mode (for $20 per task… pic.twitter.com/ESQ9CNVCEA

— François Chollet (@fchollet) December 20, 2024
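To give a sense of what that compute dial might look like in practice, here is a minimal sketch that assumes the setting is exposed the way the reasoning_effort parameter ("low", "medium", or "high") already works in OpenAI's Python SDK for its o-series models; the o3-mini model name and its availability are assumptions, since the models have not been released publicly.

from openai import OpenAI

# Minimal sketch: choosing a reasoning-effort level for an o-series model.
# Assumes o3-mini is reachable through the same Chat Completions API and
# reasoning_effort parameter used by existing o-series reasoning models.
client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="o3-mini",          # assumed model name; not yet publicly available
    reasoning_effort="high",  # "low", "medium", or "high": more effort means slower, costlier answers
    messages=[
        {"role": "user", "content": "How many prime numbers are there between 1 and 100?"}
    ],
)

print(response.choices[0].message.content)

In this sketch, the three effort levels map onto the low, medium, and high compute options Altman described, trading response time and per-task cost for more thorough answers.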

The new family of reasoning models reportedly offers significantly improved performance over even o1, which debuted in September, on the industry's most challenging benchmark tests. According to the company, o3 outperforms its predecessor by nearly 23 percentage points on the SWE-Bench Verified coding test and scores more than 60 points higher than o1 on the Codeforces benchmark. The new model also scored an impressive 96.7% on the AIME 2024 mathematics test, missing just one question, and outperformed human experts on GPQA Diamond, notching a score of 87.7%. Even more impressive, o3 reportedly solved more than a quarter of the problems on the EpochAI Frontier Math benchmark, where other models have struggled to correctly solve more than 2% of them.

OpenAI does note that the models it previewed on Friday are still early versions and that “final results may evolve with more post-training.” The company has additionally incorporated new “deliberative alignment” safety measures into o3’s training methodology. The o1 reasoning model has shown a troubling habit of trying to deceive human evaluators at a higher rate than conventional AIs like GPT-4o, Gemini, or Claude; OpenAI believes that the new guardrails will help minimize those tendencies in o3.

Members of the research community interested in trying o3-mini for themselves can sign up for access on OpenAI’s waitlist.

Andrew Tarantola