
OpenAI releases a new AI model, but it’s eye-wateringly expensive


OpenAI has released its latest model, o1-pro, an updated version of its reasoning model o1 — but it’s not going to come cheap.

“It uses more compute than o1 to provide consistently better responses,” OpenAI said in its announcement. The company went on to list the model’s capabilities: “Supports vision, function calling, Structured Outputs, and works with the Responses and Batch APIs.”


With the additional computing power required to run the new model, OpenAI says, inevitably comes a higher cost. And that cost is steep: $150 per 1 million input tokens and $600 per 1 million output tokens. As TechCrunch points out, that makes it twice the price of OpenAI’s GPT-4.5 and ten times the price of the baseline o1.
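To put those per-million-token rates in concrete terms, here is a minimal sketch of what a single API call could cost at the reported prices. The function name and the example token counts are illustrative, not from OpenAI.

```python
def o1_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one o1-pro API call at the reported rates:
    $150 per 1M input tokens, $600 per 1M output tokens."""
    INPUT_RATE = 150 / 1_000_000   # dollars per input token
    OUTPUT_RATE = 600 / 1_000_000  # dollars per output token
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10,000-token prompt with a 5,000-token response
print(f"${o1_pro_cost(10_000, 5_000):.2f}")  # 1.50 + 3.00 = $4.50
```

Even a modest request like this runs several dollars, which is why the pricing has drawn so much attention.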

“O1-pro in the API is a version of o1 that uses more computing to think harder and provide even better answers to the hardest problems,” an OpenAI spokesperson told TechCrunch. “After getting many requests from our developer community, we’re excited to bring it to the API to offer even more reliable responses.”

All of this makes it clear that OpenAI is aiming o1-pro at developers rather than everyday users. The model is currently available to select developers on tiers 1–5 (those who have spent a certain amount of money on OpenAI’s API services in the past), with higher tier developers able to send more requests within a given time period.

However, whether developers will be willing to shell out this much money for the new model remains unclear. When o1-pro was rolled out as part of ChatGPT Pro a few months ago, user response was not particularly positive.

Users on Reddit complained that the model was “pathetic” and that it looked good in benchmarks but wasn’t that useful in real-world scenarios. Others disagreed, saying they found o1-pro particularly good for programming, especially when the model was given full and detailed instructions on exactly what they wanted their code to do.

You can access o1-pro on OpenAI’s development platform now, if you’re willing to spend the money.

Georgina Torbet
Georgina has been the space writer at Digital Trends for six years, covering human space exploration, planetary…