
OpenAI’s advanced ‘Project Strawberry’ model has finally arrived


After months of speculation and anticipation, OpenAI has released the production version of its advanced reasoning model, formerly known as Project Strawberry and now named “o1.” It is joined by a “mini” version (just as GPT-4o was) that offers faster, more responsive interactions at the cost of a smaller knowledge base.

It appears that o1 offers a mixed bag of technical advancements. It’s the first in OpenAI’s line of reasoning models designed to use humanlike deduction to answer complex questions on subjects — including science, coding, and math — faster than humans can.


For example, during testing, o1 was fed a qualifying exam for the International Mathematical Olympiad. While its predecessor, GPT-4o, correctly solved only 13% of the problems presented, o1 got 83% of them right. In an online Codeforces competition, o1 scored in the 89th percentile. What’s more, o1 can respond to queries that stumped previous models (like “which is bigger, 9.11 or 9.9?”). However, the company makes clear that this release is only a preview of the new model’s full capabilities.

The new o1 “has been trained using a completely new optimization algorithm and a new training dataset specifically tailored for it,” OpenAI’s research lead, Jerry Tworek, told The Verge. Using a combination of reinforcement learning and “chain of thought” reasoning, o1 reportedly returns more accurate inferences than its predecessor. “We have noticed that this model hallucinates less,” Tworek said, though he cautioned, “we can’t say we solved hallucinations.”

Both ChatGPT Plus and Teams subscribers will be able to test out o1 and o1-mini beginning today. Enterprise and Edu subscribers should have access by next week.

The company says that o1-mini will eventually become available to free-tier users, though it did not specify a timeline. Developers will notice a steep increase in API pricing for o1 compared to GPT-4o: access to o1 costs $15 per million input tokens (compared to $5 per million for GPT-4o) and $60 per million output tokens, four times GPT-4o’s $15 per million rate. The real question is whether the new model thinks the word “strawberry” contains two R’s or three.
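For a rough sense of how those per-million-token rates add up, here is a minimal Python sketch based only on the prices quoted above; the token counts in the example are hypothetical.

```python
# Back-of-the-envelope cost estimate for a single o1 API call, using the
# per-million-token prices quoted above. The token counts are hypothetical.
O1_INPUT_PRICE_USD = 15.00   # dollars per 1M input tokens
O1_OUTPUT_PRICE_USD = 60.00  # dollars per 1M output tokens

def estimate_o1_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of one o1 request."""
    return (input_tokens * O1_INPUT_PRICE_USD
            + output_tokens * O1_OUTPUT_PRICE_USD) / 1_000_000

# Example: a 2,000-token prompt that yields 10,000 tokens of output.
print(f"${estimate_o1_cost(2_000, 10_000):.2f}")  # roughly $0.63
```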

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
ChatGPT’s Advanced Voice feature is finally rolling out to Plus and Teams subscribers

OpenAI announced via Twitter on Tuesday that it will begin rolling out its Advanced Voice feature, as well as five new voices for the conversational AI, to subscribers of the Plus and Teams tiers throughout this week. Enterprise and Edu subscribers will gain access starting next week.

https://x.com/OpenAI/status/1838642444365369814

ChatGPT: the latest news and updates on the AI chatbot that changed everything

In the ever-evolving landscape of artificial intelligence, ChatGPT stands out as a groundbreaking development that has captured global attention. From its impressive capabilities and recent advancements to the heated debates surrounding its ethical implications, ChatGPT continues to make headlines.

Whether you're a tech enthusiast or just curious about the future of AI, dive into this comprehensive guide to uncover everything you need to know about this revolutionary AI tool.
What is ChatGPT?
ChatGPT (which stands for Chat Generative Pre-trained Transformer) is an AI chatbot, meaning you can ask it a question using natural language prompts and it will generate a reply. Unlike less sophisticated voice assistants like Siri or Google Assistant, ChatGPT is driven by a large language model (LLM). These neural networks are trained on huge quantities of text from the internet, meaning they generate altogether new responses rather than regurgitating canned answers. They're not built for a specific purpose like the chatbots of the past, and they're a whole lot smarter. The current version of ChatGPT is based on the GPT-4 model, which was trained on all sorts of written content, including websites, books, social media, and news articles, and then fine-tuned using both supervised learning and reinforcement learning from human feedback (RLHF).
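For readers who want to see the prompt-and-reply idea in code rather than in the chat window, here is a minimal sketch, assuming the official openai Python SDK (v1.x), an OPENAI_API_KEY environment variable, and an illustrative model name.

```python
# A minimal sketch of sending a natural-language prompt and reading the
# generated reply. Assumes `pip install openai` and that OPENAI_API_KEY
# is set in the environment; the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whichever model you have access to
    messages=[
        {"role": "user", "content": "In one sentence, what is a large language model?"}
    ],
)

# The reply comes back as ordinary text, just like in the chat interface.
print(response.choices[0].message.content)
```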
When was ChatGPT released?
OpenAI released ChatGPT in November 2022. When it launched, the initial version of ChatGPT ran atop the GPT-3.5 model. In the years since, the system has undergone a number of iterative advancements, with the current version of ChatGPT using the GPT-4 model family. GPT-5 is reportedly just around the corner. GPT-3 first launched in 2020 and GPT-2 the year before that, though neither was used in the public-facing ChatGPT system.
Upon its release, ChatGPT's popularity skyrocketed practically overnight. It grew to more than 100 million users in its first two months, making it the most quickly adopted piece of software to date, though that record has since been beaten by the Twitter alternative Threads. ChatGPT's popularity dipped briefly in June 2023, reportedly losing 10% of its global users, but it has since continued to grow.
How to use ChatGPT
First, go to chatgpt.com. If you'd like to maintain a history of your previous chats, sign up for a free account; you can also use the system anonymously without logging in if you prefer. Users can opt to connect their ChatGPT login to their Google, Microsoft, or Apple accounts as well. At the sign-up screen, you'll see some basic notes about ChatGPT, including its potential for errors, how OpenAI collects data, and how users can submit feedback. If you want to get started, we have a roundup of the best ChatGPT tips.

ChatGPT’s resource demands are getting out of control

It's no secret that the growth of generative AI has demanded ever-increasing amounts of water and electricity, but a new study from The Washington Post and researchers at the University of California, Riverside shows just how many resources OpenAI's chatbot needs in order to perform even its most basic functions.

In terms of water usage, the amount needed for ChatGPT to write a 100-word email depends on the state and the user's proximity to OpenAI's nearest data center. The scarcer water is in a given region, and the cheaper its electricity, the more likely the data center is to rely on electrically powered air-conditioning units instead. In Texas, for example, the chatbot consumes an estimated 235 milliliters of water to generate a single 100-word email. That same email drafted in Washington, on the other hand, would require 1,408 milliliters (nearly a liter and a half).
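To put those per-email figures in perspective, here is a small Python sketch that simply scales the estimates quoted above; the weekly email count is a made-up example.

```python
# Scale the per-email water estimates cited above (235 ml in Texas,
# 1,408 ml in Washington) to a hypothetical week of heavy use.
PER_EMAIL_ML = {"Texas": 235, "Washington": 1_408}
EMAILS_PER_WEEK = 100  # hypothetical usage

for state, milliliters in PER_EMAIL_ML.items():
    liters = milliliters * EMAILS_PER_WEEK / 1_000
    print(f"{state}: ~{liters:.1f} liters of water per week")
# Texas: ~23.5 liters of water per week
# Washington: ~140.8 liters of water per week
```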
