
GPT-4 vs. ChatGPT: just how much better is the latest version?

A laptop opened to the ChatGPT website.
Shutterstock

GPT-4 is the latest language model for the ChatGPT AI chatbot, and despite just being released, it’s already making waves. The new model is smarter in a number of exciting ways, most notably its ability to understand images, and it can also process over eight times as many words as its predecessor. It’s a lot harder to fool now as well.

You’ll need to pay to use the new version though, as for now, it’s locked behind the ChatGPT Plus subscription.


How do you use GPT-4 and ChatGPT?


The easiest way to access ChatGPT is through the official OpenAI ChatGPT website. There’s a lot of interest in it at the moment, and OpenAI’s servers regularly hit capacity, so you may have to wait for a spot to open up. Refresh a few times, though, and you should be able to get in.

If you don’t want to wait, you can sign up for a ChatGPT Plus subscription. That gives you priority access, and you should be able to use ChatGPT whenever you want if you’re a paid member. However, there is a waitlist for new subscribers at this time, so you may have to wait a little while anyway.

You’ll also need to sign up if you want to use GPT-4. The default, free version of ChatGPT currently runs GPT-3.5, a refined version of the GPT-3 model that has been in use since 2020. GPT-4 is, for now, a subscriber-only feature, though as development continues, it may well become more widely available.
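If you’d rather work with the models programmatically, the same tiering applies to OpenAI’s API: GPT-4 access is granted separately from GPT-3.5. The snippet below is a minimal sketch using OpenAI’s official Python library, assuming you have an API key set in your environment; the fall-back-to-GPT-3.5 logic is purely illustrative, not something OpenAI prescribes.

```python
# Minimal sketch: ask GPT-4 via OpenAI's Python SDK, falling back to GPT-3.5
# if the account doesn't have GPT-4 access. Requires `pip install openai`
# and the OPENAI_API_KEY environment variable. Illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    for model in ("gpt-4", "gpt-3.5-turbo"):
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception:
            # GPT-4 may be unavailable on some accounts; try the older model.
            continue
    raise RuntimeError("No available model")

print(ask("Summarize the difference between GPT-3.5 and GPT-4 in one sentence."))
```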

What can GPT-4 do better than ChatGPT?

GPT-4 is a next-generation language model for the AI chatbot, and though OpenAI isn’t being specific about what changes it’s made to the underlying model, it is keen to highlight how much improved it is over its predecessor. OpenAI claims that it can process up to 25,000 words at a time — that’s eight times more than the original GPT-3 model — and it can understand much more nuanced instructions, requests, and questions than GPT-3.5, the model used in the existing ChatGPT AI.
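Under the hood, those limits are measured in tokens rather than words; the largest GPT-4 variant OpenAI has described accepts roughly 32,000 tokens, which works out to about 25,000 words. If you want to check whether a prompt will fit, here’s a quick sketch using OpenAI’s open-source tiktoken library; the hard-coded limit is an assumption you should adjust for the model you’re actually using.

```python
# Quick sketch: count tokens in a prompt with OpenAI's tiktoken library
# (pip install tiktoken). The limit below is an assumption matching the
# largest GPT-4 context variant; check your model's actual limit.
import tiktoken

GPT4_MAX_TOKENS = 32_768  # assumed limit for the 32k GPT-4 variant

def fits_in_context(text: str, limit: int = GPT4_MAX_TOKENS) -> bool:
    encoding = tiktoken.encoding_for_model("gpt-4")
    token_count = len(encoding.encode(text))
    print(f"{token_count} tokens")
    return token_count <= limit

fits_in_context("Explain the difference between tokens and words.")
```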

OpenAI also assures us that GPT-4 will be much harder to trick, won’t spit out falsehoods as often, and is more likely to turn down inappropriate requests or queries that could see it generate harmful responses.

But GPT-4 also has some exciting new abilities that early adopters are already putting to good use.

GPT-4 can understand images

GPT-4 is a multimodal language model, which means it can understand other media, like images, as well as text. This might sound familiar if you’ve had a go with AI image generators like Stable Diffusion, but GPT-4 goes further: it can respond to queries about the images you show it. This has led to some exciting uses, like GPT-4 creating a website based on a quick sketch, or suggesting recipes after analyzing a photo of the ingredients a user has to hand.

Now let's get into the details.

GPT-4 is multimodal and it now accepts the images as inputs and generates captions, classifications, and analyses. 🔥

Below is one such example of giving an input image of ingredients and asking GPT-4 to generate a list of recipes. pic.twitter.com/mJMq8zLgkk

— Sumanth (@Sumanth_077) March 15, 2023
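To try something like the recipe example above through the API, you attach an image (as a URL or base64 data) to a normal chat message. The sketch below assumes your account has access to a vision-capable GPT-4-class model; the model name and image URL are placeholders, and image input wasn’t broadly available when GPT-4 first launched.

```python
# Rough sketch of sending an image to a vision-capable GPT-4-class model via
# the OpenAI Python SDK. The model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable GPT-4-class model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What recipes could I make with these ingredients?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```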

It’s getting much better at programming

ChatGPT has already shown itself to be a capable programmer, but GPT-4 takes things to a whole new level. Early users have managed to get it to build basic games in just a few minutes: both Snake and Pong were recreated from scratch, despite the users having next to no programming experience.
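There’s no special trick involved: users describe the game they want and paste GPT-4’s output into a file. As a rough sketch of what that looks like via the API (the prompt wording and filename are just illustrative, and the generated code usually needs a once-over before you run it):

```python
# Illustrative sketch: ask GPT-4 for a simple Pong clone and save whatever
# code it returns. The prompt and filename are assumptions; results vary.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a complete, runnable Pong game in Python using pygame. "
    "Reply with only the code, no explanations."
)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
with open("pong.py", "w") as f:
    f.write(response.choices[0].message.content)
print("Saved GPT-4's attempt to pong.py")
```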

It can pass exams

ChatGPT was good at acting like a human, but put it under stress and the cracks and seams would often show. With GPT-4, that’s much less likely to happen. In fact, it performs so well on tests designed for humans that it passed the Uniform Bar Exam in the 90th percentile of test takers. It also scored in the 99th percentile on the Biology Olympiad, a test on which ChatGPT running GPT-3.5 only managed the 31st percentile.

GPT-4 can create its own lawsuits

The combination of improved reasoning and text comprehension has a lot of potential for the DoNotPay team, which is working on using GPT-4 to generate “one-click lawsuits” that let users sue robocallers who spam them. Such a system could also scan medical bills to identify errors, or compare prices with other hospitals to help bring bills down, and could even draft a legal defense using the No Surprises Act.

It can understand humor

GPT-4 is much better at understanding what makes something funny. Not only can it tell better jokes when asked, but if you show it a meme or other funny image and ask it to explain what’s funny about it, it can understand what’s going on and explain it to you.

GPT-4 limitations

Like ChatGPT before it, GPT-4 isn’t perfect. It’s certainly a worthy competitor to Google Bard, but it still has a way to go before it stops making mistakes and can handle just about any task.

At the time of writing, GPT-4 was trained on data collected up to September 2021, so it has no knowledge of anything after that date. That’s a serious limitation on what the AI can do, and it means that as time goes on, its answers become less reliable because it lacks the most up-to-date information.

Like its predecessor language models, GPT-4 is also prone to “hallucinations,” where it presents inaccurate information as fact. This reportedly happens far less often with the new model, but it isn’t immune, which raises concerns over its use in accuracy-sensitive settings. It’s also limited in its ability to learn from experience, so it may keep making the same errors even after they’re pointed out to it.

GPT-4 is currently limited to 100 messages every four hours, even for ChatGPT Plus subscribers, and if you aren’t already a member, you’ll have to join the waitlist and state your reasons for wanting to use it.

Jon Martindale
Jon Martindale is a freelance evergreen writer and occasional section coordinator, covering how to guides, best-of lists, and…