
GPT-5 to take AI forward in these two important ways

Breaking Down Barriers to AI Innovation with Reid Hoffman & Kevin Scott

We could soon see generative AI systems capable of passing Ph.D. exams thanks to more "durable" memory and more robust reasoning, Microsoft CTO Kevin Scott predicted when he took to the stage with Reid Hoffman during a Berggruen Salon in Los Angeles earlier this week.


“It’s sort of weird right now that you have these interactions with agents and the memory is entirely episodic,” he lamented. “You have a transaction, you do a thing. It’s useful or not for whatever task you were doing, and then it forgets all about it.” The AI system isn’t learning from or even remembering previous interactions with the user, he continued. “There’s no way for you to refer back to a thing you were trying to get [the AI] to solve in the past.”

However, Scott is optimistic that "we're seeing technically all of the things fall in place to have really durable memories with the systems." With more persistent memory, future AI systems will be able to respond more naturally and more accurately over the span of multiple conversations, rather than being limited to the current session.

OpenAI announced in February that it was beginning to test a new persistent memory system, rolling it out to select free and Plus subscription users. Enabling the feature allows the AI to recall user tone, voice, and format preferences between conversations as well as make suggestions in new projects based on details the user mentioned in previous chats.
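At its core, the durable-memory idea reduces to a simple loop: distill facts from each session, store them outside the model, and feed them back into future prompts. The sketch below illustrates that loop in Python; it is not OpenAI's actual implementation, and the memory.json file, the helper names, and the hard-coded example fact are all hypothetical. A real system would use the model itself (or an embedding store) to decide what is worth remembering.

```python
# Minimal sketch of durable memory: facts persist to disk between sessions
# and are injected into later prompts. All names here are hypothetical.
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical storage location

def load_memories() -> list[str]:
    """Return facts remembered from earlier conversations, if any."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(fact: str) -> None:
    """Append a fact so future sessions can refer back to it."""
    memories = load_memories()
    memories.append(fact)
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def build_prompt(user_message: str) -> str:
    """Prepend remembered facts so the exchange is no longer purely episodic."""
    context = "\n".join(f"- {m}" for m in load_memories())
    return f"Known about this user:\n{context}\n\nUser: {user_message}"

# A later session can now draw on what an earlier one stored.
save_memory("Prefers concise answers formatted as Markdown")
print(build_prompt("Summarize today's AI news."))
```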

Scott was similarly upbeat about overcoming the "fragility" found in the reasoning of many AI systems today. "It can't solve very complicated math problems," he explained. "It has to bail out to other systems to do very complicated things."
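That "bail out" step is essentially tool delegation: the model hands exact computation to an external system rather than attempting it in-weights. Here is a minimal sketch of the pattern, assuming a hypothetical calculator_tool helper and a trivial router; production systems use the model's native tool-calling interface rather than an explicit check like this.

```python
# Minimal sketch of tool delegation: exact math is routed to an external
# calculator instead of being attempted by the model. Names are hypothetical.
import math

def calculator_tool(expression: str):
    """External solver the model 'bails out' to for exact arithmetic."""
    # eval is used for brevity; a production tool would parse expressions safely.
    return eval(expression, {"__builtins__": {}}, vars(math))

def answer(question: str, expression: str | None = None) -> str:
    """Route to the tool when the model has emitted a formal expression."""
    if expression is not None:
        return f"Tool result: {calculator_tool(expression)}"
    return f"Free-form model answer to: {question}"

# The model recognizes a hard computation and delegates it.
print(answer("What is 3 to the 40th power?", expression="3 ** 40"))
print(answer("Why is the sky blue?"))
```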

“Reasoning, I think, gets a lot better,” he continued, comparing GPT-4 and the current generation of models to high schoolers passing their AP exams. The next generation of AIs, however, “could be the thing that could pass your qualifying exam.”

To date, generative AI systems have outperformed their flesh-and-blood counterparts on a variety of exams and tasks. Last November, for example, GPT-4 passed the Multistate Professional Responsibility Exam (MPRE), the legal ethics exam required of most prospective U.S. lawyers, with 76% correct — six points higher than the national average for humans.

Scott was quick to point out, however, that training generative AIs to pass Ph.D. exams “probably sounds like a bigger deal than it actually is… the real test will be what we choose to do with it.”

Scott was especially excited to see the barriers to entry falling away so quickly. He noted that when he got into machine learning two decades ago, his work required graduate-level knowledge, stacks upon stacks of “very daunting, complicated, technical papers to figure out how to do what I wanted to do,” and around six months of coding. That same task today, he said, “a high school student could do in a Saturday morning.”

These lowered barriers to entry will likely accelerate the democratization of AI, Scott concluded. Finding solutions to the myriad social, environmental, and technological crises facing humanity is not — and cannot be — the sole responsibility of “just the people at tech companies in Silicon Valley or just people who graduated with Ph.D.s from top-five universities,” he said. “We have 8 billion people in the world who also have some idea about what it is that they want to do with powerful tools, if they just have access to them.”

Andrew Tarantola