GPT-4 Turbo is the biggest update since ChatGPT’s launch

A person typing on a laptop that is showing the ChatGPT generative AI website.
Matheus Bertelli / Pexels

OpenAI has just unveiled the latest updates to its large language models (LLMs) during its first developer conference, and the most notable improvement is the release of GPT-4 Turbo, which is now available in preview. GPT-4 Turbo is an update to the existing GPT-4, bringing with it a greatly expanded context window and access to much more recent knowledge. Here’s everything you need to know about GPT-4 Turbo.

OpenAI claims that the new model is more powerful while simultaneously being cheaper than its predecessors. Unlike the previous versions, it has been trained on information dating up to April 2023. That’s a hefty update on its own, given that the previous version’s knowledge maxed out in September 2021. I just tested this myself, and indeed, using GPT-4 in ChatGPT now draws on events that happened up until April 2023, so that part of the update is already live.

GPT-4 Turbo has a significantly larger context window than the previous versions. The context window is essentially the amount of text the model takes into consideration before it generates a reply. GPT-4 Turbo now has a context window of 128,000 tokens (the units of text or code that LLMs read), which, as OpenAI reveals in its blog post, is the equivalent of around 300 pages of text.

That’s an entire novel that you could potentially feed to ChatGPT over the course of a single conversation, and a much greater context window than the previous versions had (8,000 and 32,000 tokens).
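
If you want a feel for what a token actually is, OpenAI’s open-source tiktoken library will count them for you. Here’s a minimal sketch (the sentence and the counts are just illustrative, and the exact numbers depend on the tokenizer a given model uses):

```python
# pip install tiktoken
import tiktoken

# Grab the tokenizer that tiktoken associates with GPT-4.
encoding = tiktoken.encoding_for_model("gpt-4")

text = "GPT-4 Turbo has a 128,000-token context window."
tokens = encoding.encode(text)

print(len(tokens), "tokens")  # a short sentence is only a dozen or so tokens
print(128_000 // len(tokens), "copies of that sentence would fit in the window")
```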

Context windows are important for LLMs because they help the model stay on topic. If you interact with large language models for long enough, you’ll find that they can go off topic as a conversation drags on. This can produce some pretty unhinged and unnerving responses, such as that time when Bing Chat told us that it wanted to be human. GPT-4 Turbo, if all goes well, should keep the insanity at bay for much longer than the current model.

GPT-4 Turbo will also be cheaper for developers to run, with input priced at $0.01 per 1,000 tokens (roughly 750 words) and output priced at $0.03 per 1,000 tokens. OpenAI says that makes input tokens three times cheaper than they were for GPT-4.
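
As a back-of-the-envelope sketch based on those quoted rates (the numbers below are just arithmetic, not official pricing guidance), here’s what a single large request might cost:

```python
# GPT-4 Turbo rates quoted above: $0.01 per 1,000 input tokens,
# $0.03 per 1,000 output tokens.
INPUT_RATE = 0.01 / 1_000    # dollars per input token
OUTPUT_RATE = 0.03 / 1_000   # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Approximate cost in dollars for one API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Filling the entire 128,000-token context window and getting a
# 1,000-token reply back works out to about $1.28 + $0.03 = $1.31.
print(f"${estimate_cost(128_000, 1_000):.2f}")
```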

The company also says that GPT-4 Turbo does a better job of following instructions carefully, and it can be told to return its results in a specific format, such as XML or valid JSON. GPT-4 Turbo will also support image inputs and text-to-speech, and it still offers DALL-E 3 integration.
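
On the developer side, that instruction-following improvement shows up as a JSON mode in the API, which constrains the model to return valid JSON. Here’s a minimal sketch using OpenAI’s Python client; the model name is the GPT-4 Turbo preview identifier OpenAI announced at launch, so treat the specifics as illustrative:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # GPT-4 Turbo preview at launch
    response_format={"type": "json_object"},  # constrain the output to valid JSON
    messages=[
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three things a 128,000-token context window is useful for."},
    ],
)

print(response.choices[0].message.content)  # a JSON string
```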

A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot.
Rolf van Root / Unsplash

This wasn’t the only big reveal for OpenAI, which also introduced GPTs, custom versions of ChatGPT that anyone can make for their own specific purpose with no knowledge of coding. These GPTs can be made for personal or company use, but can also be distributed to others. OpenAI says that GPTs are available today for ChatGPT Plus subscribers and enterprise users.

Lastly, in light of ongoing copyright concerns, OpenAI is joining Google and Microsoft in promising to step in and cover the costs if its customers face copyright infringement claims, a program it calls Copyright Shield.

With the enormous context window, the new Copyright Shield, and an improved ability to follow instructions, GPT-4 Turbo might turn out to be both a blessing and a curse. ChatGPT is fairly good at not doing things it shouldn’t, but even so, it has a dark side. This new version, while far more capable, may also come with the same drawbacks as other LLMs, except this time, they’ll be on steroids.

Monica J. White