OpenAI’s GPT-3 algorithm is here, and it’s freakishly good at sounding human

When the text-generating algorithm GPT-2 was created in 2019, it was labeled as one of the most “dangerous” A.I. algorithms in history. In fact, some argued that it was so dangerous that it should never be released to the public (spoiler: It was) lest it usher in the “robot apocalypse.” That, of course, never happened. GPT-2 was eventually released to the public, and after it didn’t destroy the world, its creators moved on to the next thing. But how do you follow up the most dangerous algorithm ever created?

The answer, at least on paper, is simple: Just like the sequel to any successful movie, you make something that’s bigger, badder, and more expensive. Only one xenomorph in the first Alien? Include a whole nest of them in the sequel, Aliens. Just a single nigh-indestructible machine sent back from the future in Terminator? Give audiences two of them to grapple with in Terminator 2: Judgment Day.

The same is true for A.I. — in this case, GPT-3, a recently released natural language processing neural network created by OpenAI, the artificial intelligence research lab that was once (but no longer) sponsored by SpaceX and Tesla CEO Elon Musk.

GPT-3 is the latest in a series of text-generating neural networks. The name GPT stands for Generative Pre-trained Transformer, referencing the Transformer, a 2017 Google innovation that can figure out the likelihood that a particular word will appear given the words around it. Fed with a few sentences, such as the beginning of a news story, the GPT pre-trained language model can generate convincingly accurate continuations, even including the formulation of fabricated quotes.
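
To get a feel for what “fed with a few sentences, it generates a continuation” looks like in practice, here is a minimal sketch (not OpenAI’s own code) using the freely available GPT-2 weights through the open-source Hugging Face transformers library; GPT-3 works the same way in principle but is only accessible through OpenAI’s API, and the prompt below is purely illustrative.

```python
# A minimal sketch of prompting a pre-trained GPT model for a continuation,
# using the public GPT-2 weights via the Hugging Face "transformers" library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"  # illustrative news-style opening
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation token by token; each new word is drawn from the
# probability distribution the model assigns given the words so far.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```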

This is why some worried that it could prove dangerous, by helping to generate false text that, like deepfakes, could help spread fake news online. Now, with GPT-3, that technology is bigger and smarter than ever.

Tale of the tape

GPT-3 is, as a boxing-style “tale of the tape” comparison would make clear, a real heavyweight bruiser of a contender. OpenAI’s original 2018 GPT had 110 million parameters, referring to the weights of the connections which enable a neural network to learn. 2019’s GPT-2, which caused much of the previous uproar about its potential malicious applications, possessed 1.5 billion parameters. Last month, Microsoft introduced what was then the world’s biggest similar pre-trained language model, boasting 17 billion parameters. 2020’s monstrous GPT-3, by comparison, has an astonishing 175 billion parameters. It reportedly cost around $12 million to train.

“The power of these models is that in order to successfully predict the next word they end up learning really powerful world models that can be used for all kinds of interesting things,” Nick Walton, chief technology officer of Latitude, the studio behind A.I. Dungeon, an A.I.-generated text adventure game powered by GPT-2, told Digital Trends. “You can also fine-tune the base models to shape the generation in a specific direction while still maintaining the knowledge the model learned in pre-training.”
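Walton’s point about fine-tuning can be illustrated with the same open-source tooling. The sketch below assumes a hypothetical plain-text file, my_corpus.txt, of domain-specific writing, and fine-tunes the public GPT-2 weights (GPT-3’s weights are not released); it is one plausible way to do it, not Latitude’s actual pipeline.

```python
# A sketch of fine-tuning a pre-trained GPT-2 model on custom text so that
# generation drifts toward that domain while keeping pre-trained knowledge.
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the corpus into fixed-length blocks of token IDs.
dataset = TextDataset(tokenizer=tokenizer,
                      file_path="my_corpus.txt",  # hypothetical training text
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()  # Nudges the pre-trained weights toward the new domain.
```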

Gwern Branwen, a commentator and researcher who writes about psychology, statistics, and technology, told Digital Trends that the kind of pre-trained language model GPT represents has become “increasingly a critical part of any machine learning task touching on text. In the same way that [the standard suggestion for] many image-related tasks have become ‘use a [convolutional neural network],’ many language-related tasks have become ‘use a fine-tuned [language model].’”

OpenAI — which declined to comment for this article — is not the only company doing some impressive work with natural language processing. As mentioned, Microsoft has stepped up to the plate with some dazzling work of its own. Facebook, meanwhile, is heavily investing in the technology and has created breakthroughs like BlenderBot, the largest ever open-sourced, open-domain chatbot. It outperforms others in terms of engagement and also feels more human, according to human evaluators. As anyone who has used a computer in the past few years will know, machines are getting better at understanding us than ever — and natural language processing is the reason why.

Size matters

But OpenAI’s GPT-3 still stands alone in its sheer record-breaking scale. “GPT-3 is generating buzz primarily because of its size,” Joe Davison, a research engineer at Hugging Face, a startup working on the advancement of natural language processing by developing open-source tools and carrying out fundamental research, told Digital Trends.

The big question is what all of this will be used for. GPT-2 found its way into a myriad of text-generating systems, powering everything from creative writing tools to text adventure games like A.I. Dungeon.

Davison expressed some caution that GPT-3 could be limited by its size. “The team at OpenAI have unquestionably pushed the frontier of how large these models can be and showed that growing them reduces our dependence on task-specific data down the line,” he said. “However, the computational resources needed to actually use GPT-3 in the real world make it extremely impractical. So while the work is certainly interesting and insightful, I wouldn’t call it a major step forward for the field.”

Others disagree, though. “The artificial intelligence community has long observed that combining ever-larger models with more and more data yields almost predictable improvements in the power of these models, very much like Moore’s Law of scaling compute power,” Yannic Kilcher, an A.I. researcher who runs a YouTube channel, told Digital Trends. “Yet, also like Moore’s Law, many have speculated that we are at the end of being able to improve language models by simply scaling them up, and in order to get higher performance, we would need to make substantial inventions in terms of new architectures or training methods. GPT-3 shows that this is not true and the ability to push performance simply through scale seems unbroken — and there’s not really an end in sight.”

Passing the Turing Test?

Branwen suggests that tools like GPT-3 could be a major disruptive force. “One way to think of it is, what jobs involve taking a piece of text, transforming it, and emitting another piece of text?” Branwen said. “Any job which is described by that — such as medical coding, billing, receptionists, customer support, [and more] would be a good target for fine-tuning GPT-3 on, and replacing that person. A great many jobs are more or less ‘copying fields from one spreadsheet or PDF to another spreadsheet or PDF’, and that sort of office automation, which is too chaotic to easily write a normal program to replace, would be vulnerable to GPT-3 because it can learn all of the exceptions and different conventions and perform as well as the human would.”
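The kind of “text in, text out” office task Branwen describes might be expressed as a simple prompt against GPT-3. The sketch below assumes beta access to OpenAI’s API and uses the completion endpoint as exposed by the openai Python package at the time; the engine name, API key placeholder, and invoice text are all illustrative, not taken from any real workflow.

```python
# A hedged illustration of automating a data-entry-style task with GPT-3
# through OpenAI's API (private beta at the time of writing).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

prompt = (
    "Extract the invoice number, total amount, and due date from the text "
    "below and rewrite them as comma-separated values.\n\n"
    "Invoice INV-2041 for $1,180.00 is payable by August 15, 2020.\n\n"
    "Output:"
)

response = openai.Completion.create(
    engine="davinci",   # GPT-3's largest engine in the 2020 beta
    prompt=prompt,
    max_tokens=30,
    temperature=0.0,    # keep output deterministic for structured tasks
)
print(response.choices[0].text.strip())
```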

Ultimately, natural language processing may be just one part of A.I., but it arguably cuts to the core of the artificial intelligence dream in a way that few other disciplines in the field do. The famous Turing Test, one of the seminal debates that kick-started the field, is a natural language processing problem: Can you build an A.I. that can convincingly pass itself off as a person? OpenAI’s latest work certainly advances this goal. Now it remains to be seen what applications researchers will find for it.

“I think it is the fact that GPT-2 text could so easily pass for human that it is getting difficult to hand-wave it away as ‘just pattern recognition’ or ‘just memorization,’” Branwen said. “Anyone who was sure that the things that deep learning does is nothing like intelligence has to have had their faith shaken to see how far it has come.”
