The future of A.I.: 4 big things to watch for in the next few years

A.I. isn’t going to put humanity on the scrap heap any time soon. Nor are we one Google DeepMind publication away from superintelligence. But make no mistake about it: Artificial intelligence is making enormous strides.

As noted in the Artificial Intelligence Index Report 2021, the number of A.I. journal publications grew by 34.5% last year, a much higher growth rate than the 19.6% recorded a year earlier. A.I. is going to transform everything from medicine to transportation, and few would argue otherwise.

Here in 2021, we’re well into the deep learning revolution, which supercharged A.I. in the twenty-first century. But “deep learning” is a broad term that, by now, most people are very familiar with. So where are the next big advances in A.I. coming from? Where should you be looking to see the future unfolding in front of you? Here are some of the technologies to keep an eye on.

Transformers: More than meets the eye

“Robots in disguise // Autobots wage their battle // To destroy the evil forces // Of the Decepticons.” Wait, that’s something else!

In fact, far from being a franchise that enjoyed its heyday last century, Transformers, the A.I. models, represent one of the field’s most promising present-day advances, particularly in natural language processing research.

Language understanding has been a key interest in A.I. since before it was even called A.I., dating back all the way to Alan Turing’s proposed test for machine intelligence. Transformer models, first described by Google researchers in 2017, have been shown to be vastly superior to previous language models. One reason is the almost unfathomably large datasets they can be trained on. They can be used for machine translation, summarizing documents, answering questions, understanding the content of video, and much, much more. While large language models certainly pose problems, their success is not to be denied.
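To make that concrete, here is a minimal sketch of putting a pre-trained Transformer to work on two of the tasks listed above, summarization and question answering. It assumes the open-source Hugging Face transformers library, which the article does not mention, and it relies on the small default models its pipelines download, not the giant systems discussed below.

```python
# A minimal sketch of applying pre-trained Transformer models to two tasks the
# article mentions: summarizing a document and answering a question about it.
# Assumes the Hugging Face `transformers` library (pip install transformers);
# the default models the pipelines download are illustrative, not GPT-3-scale.
from transformers import pipeline

article = (
    "Transformer models, first described by Google researchers in 2017, "
    "have been shown to be vastly superior to previous language models. "
    "They can be used for machine translation, summarizing documents, "
    "answering questions, and much more."
)

# Summarization with a pre-trained sequence-to-sequence Transformer.
summarizer = pipeline("summarization")
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])

# Extractive question answering against the same passage.
qa = pipeline("question-answering")
result = qa(question="When were Transformer models first described?", context=article)
print(result["answer"])
```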

The advent of Transformers led to the development of GPT-3 (Generative Pre-trained Transformer 3), which boasts 175 billion parameters, was trained on 45TB of text data, and cost upward of $12 million to build. At the start of this year, Google took back its crown by debuting a new language model with some 1.6 trillion parameters, roughly nine times the size of GPT-3. The Transformer revolution is just beginning.

Generative adversarial networks

Conflict doesn’t usually make the world a better place. But it certainly makes A.I. better.

Over the past several years, there have been considerable advances in image generation: the use of A.I. to dream up pictures that are indistinguishable from real photographs. This isn’t just about social media-fueled conspiracy theories fooling people into thinking that President Biden has been caught partying with the Illuminati, either. Image generation can be used for everything from improving search capabilities to helping designers create variations on a theme to generating artwork that sells for millions at auction.

So where does the conflict come into play? One of the principal technologies for image generation is the generative adversarial network (GAN). This class of machine learning framework pits a “generator” algorithm against a “discriminator” in a tug of war: the generator produces images, the discriminator judges them, and the feedback drives incremental improvements until the discriminator can no longer tell which images are real and which are fake. GANs have also been used to generate synthetic genetic code for use by researchers.
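For readers who want to see that tug of war in miniature, here is a rough sketch of a GAN training loop. It uses PyTorch purely as an assumption (the article names no framework), and it learns a toy 2-D distribution rather than real images so it stays self-contained and quick to run.

```python
# A toy GAN: a generator learns to mimic points clustered around (2, 2) while a
# discriminator learns to tell real points from generated ones. PyTorch and the
# toy data are assumptions for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim = 8, 2

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for real data: points scattered around (2, 2).
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_batch()
    fake = G(torch.randn(64, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, latent_dim))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# After training, generated points should cluster near (2, 2), and the
# discriminator's outputs should hover around 0.5: it can no longer tell
# real from fake.
print(G(torch.randn(5, latent_dim)).detach())
```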

Look for plenty more innovative applications in the near future.

Neuro-symbolic A.I.

In a December 2020 publication, researchers Artur d’Avila Garcez and Luis Lamb described neuro-symbolic A.I. as the “third wave” of artificial intelligence. Neuro-symbolic A.I. is not, strictly speaking, totally new. It’s more like getting two of the world’s greatest rock stars, who once battled at the top of the charts, together to create a supergroup. In this case, the supergroup consists of self-learning neural networks and rule-based symbolic A.I.

“Neural networks and symbolic ideas are really wonderfully complementary to each other,” David Cox, director of the MIT-IBM Watson A.I. Lab in Cambridge, Massachusetts, previously told Digital Trends. “Because neural networks give you the answers for getting from the messiness of the real world to a symbolic representation of the world, finding all the correlations within images. Once you’ve got that symbolic representation, you can do some pretty magical things in terms of reasoning.”

The results could give us A.I. that is better at carrying out this kind of reasoning, as well as more explainable A.I. that can, well, explain why it made the decisions it did. Look for this to be a promising avenue of A.I. research in the years to come.
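As a loose illustration of the hand-off Cox describes, the sketch below has a neural network turn raw pixels into a discrete symbol, then applies hand-written rules to that symbol. The network, labels, and rules are all invented for this example; it is not code from the MIT-IBM Watson A.I. Lab, and the network is left untrained.

```python
# A toy neuro-symbolic pipeline: "perception" (a neural network) maps messy input
# to a symbol, and "reasoning" (hand-written rules) operates on that symbol.
# Everything here is illustrative: the untrained network, the labels, the rules.
import torch
import torch.nn as nn

LABELS = ["cat", "dog", "car"]

# Perception: an untrained, illustrative network maps a 28x28 image to a symbol.
perception = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, len(LABELS)))

def image_to_symbol(image: torch.Tensor) -> str:
    logits = perception(image.unsqueeze(0))
    return LABELS[int(logits.argmax())]

# Reasoning: symbolic rules applied to whatever symbols perception extracts.
RULES = {
    ("cat",): "animal detected: route to the pet-photo album",
    ("dog",): "animal detected: route to the pet-photo album",
    ("car",): "vehicle detected: check the parking rules",
}

def reason(symbols: tuple) -> str:
    return RULES.get(symbols, "no rule applies")

# Run the two stages end to end on a fake image.
fake_image = torch.rand(28, 28)
symbol = image_to_symbol(fake_image)
print(symbol, "->", reason((symbol,)))
```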

Machine learning meets molecular synthesis

Along with GPT-3, last year’s most significant A.I. advance was DeepMind’s astonishing AlphaFold, which applied deep learning to the decades-old biology challenge of protein folding. Progress on this problem could lead to new treatments for disease, faster drug discovery, a deeper understanding of life at the cellular level, and more. This last entry on the list is less a specific A.I. technology and more an example of how A.I. is making a big difference in a single domain.

Machine learning techniques are proving transformative for healthcare and biology in fields like molecular synthesis, where ML can help scientists work out which drug candidates are worth evaluating and then how to most effectively synthesize them in the lab. There is perhaps no more life-changing area in which A.I. will be put to work over the next decade and beyond.
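As a rough illustration of the “which candidates should we test?” step, the sketch below trains a simple regression model on previously measured compounds and uses it to rank untested ones. The descriptors, activity values, and model choice are placeholders invented for this example; real pipelines rely on chemistry-aware features and far larger datasets.

```python
# A hedged sketch of ML-assisted candidate prioritization: fit a model on known
# compounds, then score untested candidates so the most promising are evaluated
# first. All data here is synthetic; the 16 "descriptors" stand in for real
# molecular features such as fingerprints or physicochemical properties.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in training data: 200 known compounds with a measured activity score.
known_descriptors = rng.random((200, 16))
known_activity = known_descriptors[:, 0] * 2 + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(known_descriptors, known_activity)

# Score a batch of untested candidates and surface the most promising ones first.
candidates = rng.random((50, 16))
predicted = model.predict(candidates)
ranking = np.argsort(predicted)[::-1]
print("Top candidates to synthesize and test:", ranking[:5])
```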
