
ChatGPT can laugh now, and it’s downright creepy

OpenAI's Mira Murati introduces GPT-4o.
OpenAI

We all saw it coming, and the day is finally here — ChatGPT is slowly morphing into your friendly neighborhood AI, complete with the ability to creepily laugh along with you if you say something funny or go “aww” if you’re being nice — and that’s just scratching the surface of today’s announcements. OpenAI just held a special Spring Update Event, during which it unveiled its latest large language model (LLM), GPT-4o. With this update, ChatGPT gets a desktop app and becomes faster and more capable, but most of all, it’s now fully multimodal.


The event started with an introduction by Mira Murati, OpenAI’s CTO, who revealed that today’s updates aren’t just for paid users — GPT-4o is launching across the platform for both free users and paid subscribers. “The special thing about GPT-4o is that it brings GPT-4 level intelligence to everyone, including our free users,” Murati said.

GPT-4o is said to be much faster, but the impressive part is that it takes the model’s capabilities up a few notches across text, vision, and audio. Developers can also build it into their own apps via the API, where it’s said to be up to two times faster and 50% cheaper, with a rate limit five times higher than GPT-4 Turbo’s.
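For developers, the announcement boils down to a new model identifier behind the same chat interface as before. The snippet below is a minimal sketch of that kind of integration, assuming the official OpenAI Python SDK and the "gpt-4o" model name; the prompt text and the environment-variable API key are illustrative assumptions, not details from the event.

from openai import OpenAI  # official SDK: pip install openai

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# One-off chat completion against the new model.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Give me a one-sentence summary of GPT-4o."}],
)

print(response.choices[0].message.content)

The same call works for GPT-4 Turbo by swapping the model name, which is where the claimed speed, cost, and rate-limit differences would show up.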

OpenAI announces GPT-4o.
OpenAI

Alongside the new model, OpenAI is launching the ChatGPT desktop app as well as a refresh of the user interface on the website. The goal is to make the chatbot as easy to communicate with as possible. “We’re looking at the future of interaction between ourselves and the machines, and we think that GPT-4o is really shifting that paradigm into the future of collaboration — where the interaction becomes much more natural,” Murati said.

To that end, the new improvements — which Murati showcased with the help of OpenAI’s Mark Chen and Barret Zoph — really do appear to make the interaction much more seamless. GPT-4o is now able to analyze videos, images, and speech in real time, and it can accurately pinpoint emotion in all three. This is especially impressive in ChatGPT Voice, which became so human-like that it skirts the edge of the uncanny valley.

Saying “hi” to ChatGPT evokes an enthusiastic, friendly response with just the slightest hint of a robotic undertone. When Mark Chen told the AI that he was giving a live demo and needed help calming down, it sounded suitably impressed and jumped in with the suggestion that he take a few deep breaths. It also noticed when those breaths were far too quick — more like panting, really — and walked Chen through the right way to breathe, first making a small joke: “You’re not a vacuum cleaner.”

Introducing GPT-4o

The conversation flows naturally: you can now interrupt ChatGPT rather than waiting for it to finish, and responses come quickly with no awkward pauses. When asked to tell a bedtime story, it adjusted its tone of voice on request, going from enthusiastic to dramatic to robotic. The second half of the demo showed off ChatGPT’s ability to accurately read code, help with math problems via video, and read and describe the contents of the screen.

The demo wasn’t perfect — the bot appeared to cut off at times, and it was hard to tell whether this was due to someone else talking or because of latency. However, it sounded just about as lifelike as can be expected from a chatbot, and its ability to read human emotion and respond in kind is equal parts thrilling and anxiety-inducing. Hearing ChatGPT laugh wasn’t on my list of things I thought I’d hear this week, but here we are.

GPT-4o, with its multimodal design, will roll out gradually over the next few weeks, along with the desktop app. A few months ago, Bing Chat told us that it wanted to be human; now, we’re about to get a version of ChatGPT that comes closer to human than anything we’ve seen since the start of the AI boom.


ChatGPT can now remember more details from your past conversations
ChatGPT on a laptop

OpenAI has just announced that ChatGPT received a major upgrade to its memory features. The chatbot will now be able to remember a lot more about you, making it easier to personalize each conversation and adapt its responses. However, the feature won't be available to everyone, and there are a few things to note about the way memory will work now.

The company showed off the new update in a post on X (Twitter), giving a brief demo of how much ChatGPT can remember now. According to OpenAI, ChatGPT can now "reference all of your past chats to provide more personalized responses." Previously, only certain details were saved to memory, but now ChatGPT can draw on every single chat to reference what it knows about you in future conversations.

OpenAI might start watermarking ChatGPT images — but only for free users
OpenAI press image

Everyone has been talking about ChatGPT's new image-generation feature lately, and it seems the excitement isn't over yet. As always, people have been poking around inside the company's apps, and this time they've found mentions of a watermark feature for generated images.

Spotted by X user Tibor Blaho, the image_gen_watermark_for_free string in the code seems to suggest that the feature would only slap watermarks on images generated by free users, giving them yet another incentive to upgrade to a paid subscription.

OpenAI adjusts AI roadmap for better GPT-5
OpenAI press image

OpenAI is reconfiguring its rollout plan for upcoming AI models. The company’s CEO, Sam Altman, shared on social media on Friday that it will delay the launch of its GPT-5 large language model (LLM) in favor of releasing some lighter reasoning models first.

The company will now launch new o3 and o4-mini reasoning models in the coming weeks instead of the GPT-5 launch fans were expecting. In the meantime, OpenAI will be smoothing out some issues in the LLM’s development before a final rollout. The company hasn’t detailed a specific timeline, indicating only that GPT-5 should be available in the coming months.
