
OpenAI opens up developer access to the full o1 reasoning model

The OpenAI o1 logo (image: OpenAI)

On the ninth day of OpenAI’s holiday press blitz, the company announced that it is releasing the full version of its o1 reasoning model to select developers through its API. Until Tuesday’s news, devs could only access the less-capable o1-preview model.

According to the company, the full o1 model will begin rolling out to developers in OpenAI’s “Tier 5” category: users who have held an account for more than a month and have spent at least $1,000 with the company. The new model is especially pricey (on account of the added compute resources o1 requires), costing $15 for every (roughly) 750,000 words analyzed and $60 for every (roughly) 750,000 words generated. That’s three to four times the cost of performing the same tasks with GPT-4o.


At those prices, OpenAI made sure to improve the full model’s capabilities over the preview version’s. The new o1 model is more customizable than its predecessor (its new “reasoning_effort” parameter dictates how long the AI ponders a given question) and adds function calling, developer messages, and image analysis, all of which were missing from o1-preview.
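For developers already in Tier 5, those changes mostly show up as new request parameters. Here is a minimal sketch of what a call to the full model might look like, assuming a recent version of OpenAI’s official Node SDK; the model name, reasoning_effort values, and prompt are illustrative, not taken from OpenAI’s announcement:

```ts
// Minimal sketch: calling the full o1 model through the Chat Completions API.
// Assumes a recent version of the official "openai" Node SDK and an
// OPENAI_API_KEY in the environment; the prompt is purely illustrative.
import OpenAI from "openai";

const client = new OpenAI();

async function main() {
  const completion = await client.chat.completions.create({
    model: "o1",
    // New with the full model: hint at how long o1 should ponder before answering.
    reasoning_effort: "high", // "low" | "medium" | "high"
    messages: [
      // Developer messages are o1's equivalent of a system prompt.
      { role: "developer", content: "Answer as a terse spreadsheet expert." },
      { role: "user", content: "Sum column B only where column A says 'Q4'." },
    ],
  });

  console.log(completion.choices[0].message.content);
}

main();
```

Function calling and image inputs use the same request shape as OpenAI’s other chat models (tools and image_url content parts), which is why their absence from o1-preview mattered to API users.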

The company also announced that it is incorporating its GPT-4o and GPT-4o mini models into its Realtime API, which is built for low-latency voice applications (like Advanced Voice Mode). The API also now supports WebRTC, the industry’s open standard for real-time audio and video in web browsers, so get ready for a whole bunch more websites trying to talk to you come 2025.

“Our WebRTC integration is designed to enable smooth and responsive interactions in real-world conditions, even with variable network quality,” OpenAI wrote in its announcement. “It handles audio encoding, streaming, noise suppression, and congestion control.”
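On the browser side, that integration is mostly standard WebRTC plumbing plus one SDP exchange with OpenAI’s servers. The sketch below is illustrative rather than definitive: the getEphemeralKey helper and its /session route are hypothetical stand-ins for your own backend, and the endpoint path and model name reflect OpenAI’s launch-era documentation, so check the current docs before relying on them.

```ts
// Rough sketch of a browser client for the Realtime API over WebRTC.
// getEphemeralKey() and the "/session" route are hypothetical: your backend is
// expected to trade its real API key for a short-lived client token.
async function getEphemeralKey(): Promise<string> {
  const resp = await fetch("/session");
  const data = await resp.json();
  return data.client_secret.value; // response shape assumed from launch-era docs
}

async function connectRealtime(): Promise<RTCPeerConnection> {
  const ephemeralKey = await getEphemeralKey();
  const pc = new RTCPeerConnection();

  // Play the model's audio as it streams back.
  const audioEl = document.createElement("audio");
  audioEl.autoplay = true;
  pc.ontrack = (event) => {
    audioEl.srcObject = event.streams[0];
  };

  // Send the user's microphone audio to the model.
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  pc.addTrack(mic.getTracks()[0], mic);

  // JSON events (transcripts, tool calls, etc.) arrive on a data channel.
  const events = pc.createDataChannel("oai-events");
  events.onmessage = (e) => console.log("realtime event:", e.data);

  // Standard offer/answer exchange, with OpenAI's endpoint acting as the peer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const resp = await fetch(
    "https://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview",
    {
      method: "POST",
      body: offer.sdp,
      headers: {
        Authorization: `Bearer ${ephemeralKey}`,
        "Content-Type": "application/sdp",
      },
    }
  );
  await pc.setRemoteDescription({ type: "answer", sdp: await resp.text() });

  return pc;
}
```

The audio itself travels over the peer connection, which is where the encoding, noise suppression, and congestion control OpenAI describes are handled; the data channel carries only the JSON events.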

OpenAI has so far, as part of the live-stream event, unveiled the full version of o1 (in addition to Tuesday’s announcement), released its Sora video generation model, debuted its new Projects feature, and provided multiple updates to its Canvas, Search, and Advanced Voice Mode features.

With only three days left before the event’s finale, what will OpenAI show off next? We’ll have to wait and see.

Andrew Tarantola
Former Computing Writer