
ChatGPT just improved its creative writing chops


One of the great strengths of ChatGPT is its ability to aid in creative writing. ChatGPT’s latest large language model, GPT-4o, has received a bit of a performance boost, OpenAI announced Wednesday. Users can reportedly expect “more natural, engaging, and tailored writing to improve relevance & readability” moving forward.

GPT-4o got an update 🎉

The model’s creative writing ability has leveled up–more natural, engaging, and tailored writing to improve relevance & readability.

It’s also better at working with uploaded files, providing deeper insights & more thorough responses.

— OpenAI (@OpenAI) November 20, 2024

GPT-4o, not to be confused with o1 (formerly Project Strawberry), is OpenAI’s latest publicly available model, surpassing GPT-4 and GPT-3.5. First released in May 2024, GPT-4o offers users double the performance at half the resource cost of its direct predecessor, GPT-4 Turbo, as well as state-of-the-art benchmark results in voice, multilingual, and vision tasks. Not only is it more efficient than older versions, but it also offers a host of additional capabilities. The model’s rapid response pace makes it especially useful for real-time translation and conversation applications.


For example, ChatGPT’s Advanced Voice Mode would not be possible without the near-instantaneous text, voice, and audio inference that GPT-4o offers (and that previous models did not). GPT-4o also boasts improved reasoning skills over its predecessors: it can interpret a user’s spoken tone, pace, and mood, infer their intent, and reply in kind.

While GPT-4o is technically available at every OpenAI subscription level, including the free tier, it is not available for unlimited use. Free-tier users will find that they can only access the new model a handful of times on ChatGPT before being shunted down to the smaller GPT-4o mini model. Plus, Team, and Enterprise subscribers have a rate limit roughly five times higher.

GPT-4o mini is based on the same training data as its larger sibling, but uses fewer parameters (not to mention fewer compute resources) in its inference operations. Being lighter-weight and more responsive, GPT-4o mini has proven useful in a variety of small-scale applications, including computer code generation.
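For developers, the lightweight model is reached through the same chat completions endpoint as the full-size GPT-4o. The sketch below is illustrative rather than an official example: it assumes the current OpenAI Python SDK, an OPENAI_API_KEY set in the environment, and a made-up prompt, and simply shows how a small code-generation request to GPT-4o mini might look.

# A minimal sketch (not OpenAI's own sample code) of a code-generation
# request to GPT-4o mini via the official OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)

Swapping the model string for "gpt-4o" sends the same request to the full-size model; the mini variant simply trades some capability for lower latency and cost.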

OpenAI claims GPT-4o mini is “the most capable and cost-efficient small model available today,” per CNBC, outperforming competitors like Google’s Gemini 1.5 Flash, Meta’s Llama 3 8B, and Anthropic’s Claude 3 Haiku on a variety of benchmarks. According to data from Artificial Analysis, GPT-4o mini scored 82% on the MMLU reasoning benchmark, topping Gemini 1.5 Flash by 3% and Claude 3 Haiku by 7%.
