Following the sale of the first AI-generated artwork earlier this year, deep learning algorithms are now tackling portraits of made-up people who don't exist. In a recently published research paper, Nvidia shared the results of a generative adversarial network (GAN) trained, without supervision, to generate images of people.
The concept is based in part on style transfer, the technology that transfers the style of one image onto another and powers everything from photo apps like Prisma to realistic deepfakes. Nvidia's researchers redesigned the GAN's generator to use a style transfer approach, adjusting the style at each layer of the network rather than feeding in a single code. The change, the researchers say, allows the software to control high-level attributes like subtle differences in pose as well as random variation in fine features.
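The per-layer style adjustment the researchers describe can be illustrated with adaptive instance normalization, a common style transfer building block: each feature map is normalized, then re-scaled and re-shifted with parameters derived from a style code. This is a minimal sketch in NumPy, not the paper's implementation, and every name and dimension here is a hypothetical stand-in.

```python
import numpy as np

def adain(features, style_scale, style_bias, eps=1e-5):
    """Adaptive instance normalization: normalize each feature map,
    then re-scale and re-shift it with style-derived parameters."""
    # features: (channels, height, width); style_scale/bias: (channels,)
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)
    return style_scale[:, None, None] * normalized + style_bias[:, None, None]

# A latent "style" code is mapped (here by a fixed random affine map,
# standing in for a learned one) to a per-layer scale and bias, so each
# layer of the generator can be steered independently.
rng = np.random.default_rng(0)
features = rng.normal(size=(8, 4, 4))  # one layer's feature maps
style = rng.normal(size=16)            # latent style code
affine = rng.normal(size=(16, 16))     # hypothetical learned mapping
params = style @ affine
scale, bias = params[:8], params[8:]
out = adain(features, scale, bias)
print(out.shape)  # (8, 4, 4)
```

Because a fresh scale and bias pair is computed for every layer, a style applied at an early, coarse layer shifts large-scale attributes, while the same mechanism at a late layer only nudges fine texture.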
The researchers paired the redesigned network with a wider training dataset, a collection of faces gathered from Flickr rather than a database of celebrity photos, and adjusted the way the generator embeds the latent code for more realistic results that aren't simply re-creations of an existing person in a new photo. The researchers also switched between two different randomized codes during training for more variation in features.
While the GAN can randomly create images of people, the researchers also managed to control the results by using style transfer to mix qualities from two different portraits. When a "style" image is mixed with a "source" image, features like skin, eye, and hair color come from the style image and are applied to the source image, which keeps the original's gender, age, and pose. The lighting and background of the style image also transfer over.
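The mixing described above amounts to routing two latent codes to different depths of the generator: the source code drives the coarse, early layers that fix pose and face shape, while the style code drives the fine, later layers that set color and texture. A toy sketch, with the layer count, code size, and crossover point all hypothetical:

```python
import numpy as np

LAYERS = 6  # hypothetical generator depth

def mix_styles(source_code, style_code, crossover):
    """Build a per-layer list of style codes: 'source' controls the
    coarse (early) layers, 'style' the fine (late) layers."""
    return [source_code if i < crossover else style_code
            for i in range(LAYERS)]

rng = np.random.default_rng(1)
source = rng.normal(size=32)  # keeps gender, age, and pose
style = rng.normal(size=32)   # contributes skin, eye, and hair color
per_layer = mix_styles(source, style, crossover=3)
print(len(per_layer))  # 6
```

Moving the crossover point earlier or later controls how much of each portrait survives in the mixed result.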
The results show dramatic improvement over similar Nvidia research from four years ago, when the generated images were low-resolution, black-and-white, and short on detail. The program isn't perfect, though: the system created some portraits with two different-colored eyes, for example, and in others failed to maintain facial symmetry.
While the paper shows impressive progress, the researchers don't detail real-world uses for such algorithms. As with the technology behind deepfakes, the ability to create an image of a fake person, even through a process that isn't easy to replicate, raises concerns about misuse.