
GPT-4 Turbo is the biggest update since ChatGPT’s launch


OpenAI has just unveiled the latest updates to its large language models (LLMs) during its first developer conference, and the most notable improvement is the release of GPT-4 Turbo, which is currently entering preview. GPT-4 Turbo comes as an update to the existing GPT-4, bringing with it a greatly increased context window and access to much newer knowledge. Here's everything you need to know about GPT-4 Turbo.

OpenAI claims that the AI model will be more powerful while simultaneously being cheaper than its predecessors. Unlike the previous versions, it has been trained on information dating up to April 2023. That's a hefty update on its own, given that the previous version's knowledge maxed out at September 2021. I just tested this myself, and indeed, using GPT-4 allows ChatGPT to draw on events that happened up until April 2023, so that update is already live.

GPT-4 Turbo has a significantly larger context window than the previous versions. The context window is essentially how much text the model takes into consideration before it generates a reply. To that end, GPT-4 Turbo now has a context window of 128,000 tokens (tokens are the units of text or code that LLMs read), which, as OpenAI reveals in its blog post, is the equivalent of around 300 pages of text.

That’s an entire novel that you could potentially feed to ChatGPT over the course of a single conversation, and a much greater context window than the previous versions had (8,000 and 32,000 tokens).
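If you're curious how much of that 128,000-token budget a given piece of text would actually use, you can count tokens yourself. Here's a minimal Python sketch using OpenAI's open-source tiktoken library with the cl100k_base encoding that GPT-4 models use; the sample text and the leftover-space math are just for illustration.

# Rough token count for a piece of text, using OpenAI's tiktoken library.
# Assumes: pip install tiktoken; cl100k_base is the encoding used by GPT-4 models.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

text = "An entire novel could fit inside a 128,000-token context window."
tokens = encoding.encode(text)

print(f"{len(tokens)} tokens used")                 # how much of the window this text takes up
print(f"{128_000 - len(tokens)} tokens remaining")  # room left in a 128K window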

Context windows are important for LLMs because they help them stay on topic. If you interact with large language models, you’ll find that they may go off topic if the conversation goes on for too long. This can produce some pretty unhinged and unnerving responses, such as that time when Bing Chat told us that it wanted to be human. GPT-4 Turbo, if all goes well, should keep the insanity at bay for a much longer time than the current model.

GPT-4 Turbo is also going to be cheaper for developers to run, with input priced at $0.01 per 1,000 tokens (roughly 750 words of text), while output will cost $0.03 per 1,000 tokens. OpenAI says that makes the new version three times cheaper for input tokens, and twice as cheap for output, compared to the GPT-4 models that came before it.
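To put those prices in perspective, here's a back-of-the-envelope Python sketch that estimates what a single request would cost at the preview rates quoted above; the token counts are made-up example figures, not real measurements.

# Back-of-the-envelope cost estimate at GPT-4 Turbo's preview pricing:
# $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens.
INPUT_PRICE_PER_1K = 0.01
OUTPUT_PRICE_PER_1K = 0.03

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: sending a 100,000-token "novel" and getting a 1,000-token summary back.
print(f"${request_cost(100_000, 1_000):.2f}")  # prints $1.03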

The company also says that GPT-4 Turbo does a better job of following instructions carefully and can be told to return its results in a specific format, such as XML or valid JSON (the latter via a new JSON mode). GPT-4 Turbo will also support image inputs and text-to-speech, and it still offers DALL-E 3 integration.
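For developers, that structured output is exposed through a response_format parameter on the Chat Completions API. Here's a minimal sketch assuming the current OpenAI Python SDK and the gpt-4-1106-preview model name OpenAI announced for the GPT-4 Turbo preview; the prompt itself is just an example.

# Minimal sketch: asking GPT-4 Turbo for guaranteed-valid JSON via the Chat Completions API.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",                # GPT-4 Turbo preview model
    response_format={"type": "json_object"},   # JSON mode: the reply will be valid JSON
    messages=[
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three things you could do with a 128,000-token context window."},
    ],
)

print(response.choices[0].message.content)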


This wasn’t the only big reveal for OpenAI, which also introduced GPTs, custom versions of ChatGPT that anyone can make for their own specific purpose with no knowledge of coding. These GPTs can be made for personal or company use, but can also be distributed to others. OpenAI says that GPTs are available today for ChatGPT Plus subscribers and enterprise users.

Lastly, in light of constant copyright concerns, OpenAI joins Google and Microsoft in saying that it will take legal responsibility if its customers are sued for copyright infringement, through a new program it calls Copyright Shield.

With the enormous context window, the new copyright shield, and an improved ability to follow instructions, GPT-4 Turbo might turn out to be both a blessing and a curse. ChatGPT is fairly good at not doing things it shouldn’t do, but even still, it has a dark side. This new version, while infinitely more capable, may also come with the same drawbacks as other LLMs, except this time, it’ll be on steroids.
