A.I. creates some of the most realistic computer-generated images of people yet

Progressive Growing of GANs for Improved Quality, Stability, and Variation
Sure, artificial intelligence apps can turn your photos into paintings, but now computers can generate their own photographs — of people (and even things) that don’t actually exist. Nvidia recently created a generative adversarial network (GAN) that can generate high-resolution images from nothing but a training set database. The company shared a research paper detailing the computer-generated images on Friday, October 27.

Nvidia’s proposed method relies on a generative adversarial network, or GAN: a pair of neural networks trained through unsupervised machine learning, in which a system “learns” through trial and error without human-labeled guidance — for example, sorting images of cats and dogs into two groups on its own.

In this case, one neural network is called the “generator” and the other the “discriminator.” The generator creates an image meant to be indistinguishable from the training samples, while the discriminator compares each render to real samples and provides feedback. Over time, the generator gets better at rendering and the discriminator gets better at scrutinizing; the end goal is a generator whose renders consistently “fool” the discriminator.
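
That back-and-forth between the two networks corresponds to the standard GAN objective. Below is a minimal sketch of the two loss terms in Python with NumPy; the function names are my own, and this is illustrative rather than Nvidia’s actual training code.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: the discriminator wants its score on
    real images, D(real), near 1 and on fakes, D(fake), near 0."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """The generator wants the discriminator to score its fakes as
    real, i.e. D(fake) -> 1 (the common 'non-saturating' form)."""
    return -np.mean(np.log(d_fake))

# A confident, correct discriminator incurs a low loss...
good_d = discriminator_loss(d_real=np.array([0.9, 0.95]),
                            d_fake=np.array([0.1, 0.05]))
# ...while the generator's loss is high when its fakes are caught.
fooled_g = generator_loss(d_fake=np.array([0.1, 0.05]))
```

Training alternates between the two: the discriminator’s gradient pushes its scores apart, and the generator’s gradient pushes its fakes toward whatever the discriminator currently accepts.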

Nvidia wanted to improve on earlier image-generation attempts, including efforts by Google, by both creating higher-quality images and generating a wider variety of them in less time. To do that, the researchers created a progressive system: since the network learns as more data is fed in, the group gave it progressively harder rendering tasks as it improved.

The program started by generating low-resolution images of people who don’t actually exist, drawing on a training database made up entirely of celebrity photos. As the system improved, the researchers added more layers to the network, introducing finer and finer detail until the low-resolution renders grew into full 1,024 × 1,024-pixel photos. The result is high-resolution, detailed images of “celebrities” who don’t exist in real life.
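
That progressive schedule can be sketched as follows. This is a simplified illustration of the paper’s idea, assuming nearest-neighbour upsampling and the smooth “fade-in” blend the authors describe for newly added layers; it is not their implementation.

```python
import numpy as np

def resolution_schedule(start=4, final=1024):
    """Resolutions double each time a new layer pair is added."""
    sched = [start]
    while sched[-1] < final:
        sched.append(sched[-1] * 2)
    return sched

def upsample2x(img):
    """Nearest-neighbour 2x upsampling of an (H, W, C) image."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fade_in(old_rgb, new_rgb, alpha):
    """Blend the upsampled output of the existing network (old_rgb)
    with the freshly added layer's output (new_rgb). alpha ramps
    from 0 to 1 while the new layer trains, so the new layer is
    eased in rather than shocking the network."""
    return (1.0 - alpha) * old_rgb + alpha * new_rgb

# Growing from 4x4 toward the final 1024x1024 output:
sched = resolution_schedule()  # [4, 8, 16, 32, 64, 128, 256, 512, 1024]
```

The fade-in is the key stabilizing trick: at `alpha = 0` the network behaves exactly as it did before the new layer existed, and by `alpha = 1` the new layer has fully taken over.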

Along with creating computer-generated images at higher resolution — and with more impressive detail — the group worked to increase the variation of the generated graphics, setting new records among unsupervised algorithms. The research also introduced new ways of keeping the generator and discriminator networks from sliding into “unhealthy competition.” The group also produced an improved, higher-quality version of the celebrity image dataset it started with.
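
One technique the paper uses to encourage variation is a minibatch standard-deviation statistic appended to the discriminator’s features, so the discriminator can “see” when a batch of generated samples all look alike. A rough sketch, assuming feature maps of shape (N, C, H, W); the paper’s version groups the batch, but the idea is the same.

```python
import numpy as np

def minibatch_stddev_feature(x):
    """Append one extra feature map holding the average per-feature
    standard deviation across the minibatch. Identical samples give
    a zero channel; a varied batch gives a positive one.
    x: (N, C, H, W) batch of feature maps."""
    std = x.std(axis=0)       # (C, H, W): spread across the batch
    mean_std = std.mean()     # one scalar summarizing variation
    n, _, h, w = x.shape
    extra = np.full((n, 1, h, w), mean_std)
    return np.concatenate([x, extra], axis=1)

# A batch of identical samples has zero batch-wise spread:
out = minibatch_stddev_feature(np.ones((8, 3, 4, 4)))
```

Because a collapsed generator produces near-identical samples, this extra channel gives the discriminator an easy tell, which in turn pressures the generator to diversify.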

Along with generating images of celebrities, the group also ran the algorithms on datasets of images of objects, such as couches, horses, and buses.

“While the quality of our results is generally high compared to earlier work on GANs, and the training is stable in large resolutions, there is a long way to true photorealism,” the paper concludes. “Semantic sensibility and understanding dataset-dependent constraints, such as certain objects being straight rather than curved, leaves a lot to be desired.”

While there are still some shortcomings, the group said that photorealism with computer-generated images “may be within reach,” particularly in generating images of fake celebrities.

Hillary K. Grigonis