
ChatGPT now interprets photos better than an art critic and an investigator combined

ChatGPT visual intelligence with the o3 model. OpenAI

ChatGPT’s recent image generation capabilities have challenged our previous understanding of AI-generated media. The recently announced GPT-4o model demonstrated a noteworthy ability to interpret images with high accuracy and recreate them to viral effect, as with the Studio Ghibli-inspired trend. It even handles text in AI-generated images, something that has previously been difficult for AI. Now, OpenAI is launching two new models that can dissect images for cues, picking up details that might escape a human glance.

OpenAI announced two new models earlier this week that take ChatGPT’s thinking abilities up a notch. Its new o3 model, which OpenAI calls its “most powerful reasoning model,” improves on the existing interpretation and perception abilities, getting better at “coding, math, science, visual perception, and more,” the organization claims. Meanwhile, o4-mini is a smaller, faster model for “cost-efficient reasoning” in the same areas. The news follows OpenAI’s recent launch of the GPT-4.1 class of models, which brings faster processing and deeper context.


ChatGPT is now “thinking with images”

With their improved reasoning abilities, both models can now incorporate images directly into their chain of thought, which OpenAI describes as “thinking with images.” Going beyond basic image analysis, the o3 and o4-mini models can investigate pictures more closely and even manipulate them through actions such as cropping, zooming, flipping, or enhancing details, extracting visual cues that could improve ChatGPT’s ability to provide solutions.

Introducing OpenAI o3 and o4-mini—our smartest and most capable models to date.

For the first time, our reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, and image generation. pic.twitter.com/rDaqV0x0wE

— OpenAI (@OpenAI) April 16, 2025

According to the announcement, the models blend visual and textual reasoning, which can be combined with other ChatGPT features such as web search, data analysis, and code generation, and is expected to become the basis for more advanced AI agents with multimodal analysis.

Among other practical applications, you can feed it pictures of a multitude of items, from flow charts and scribbled handwritten notes to photos of real-world objects, and expect ChatGPT to understand them deeply enough to produce better output, even without a descriptive text prompt. With this, OpenAI is inching closer to Google’s Gemini, which offers the impressive ability to interpret the real world through live video.
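For developers, the same image-aware reasoning should also be reachable through OpenAI’s API. Below is a minimal sketch of sending a photo to a reasoning model using the OpenAI Python SDK’s chat completions image-input format; the “o3” model identifier, the file name, and the prompt are placeholder assumptions to check against OpenAI’s current documentation.

```python
# Minimal sketch: ask a reasoning model to interpret a local photo.
# Assumptions: the "o3" model identifier is available to your API key,
# and "whiteboard_flowchart.jpg" is a placeholder file name.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the image as a base64 data URL so it can be sent inline.
with open("whiteboard_flowchart.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o3",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the process shown in this flow chart."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The model receives the image alongside the text prompt, so it can reason over the picture itself rather than a written description of it.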

Despite the bold claims, OpenAI is limiting access to paid members, presumably to prevent its GPUs from “melting” again as it struggles to keep up with the compute demand for new reasoning features. For now, the o3, o4-mini, and o4-mini-high models will be exclusively available to ChatGPT Plus, Pro, and Team members, while Enterprise and Education tier users get them in a week’s time. Meanwhile, Free users will get limited access to o4-mini when they select the “Think” button in the prompt bar.

Tushar Mehta
Tushar is a freelance writer at Digital Trends and has been contributing to the Mobile Section for the past three years…