
The BigSleep A.I. is like Google Image Search for pictures that don’t exist yet

Eternity. BigSleep

In case you’re wondering, the picture above is “an intricate drawing of eternity.” But it’s not the work of a human artist; it’s the creation of BigSleep, the latest amazing example of generative artificial intelligence (A.I.) in action.

A bit like a visual version of text-generating A.I. model GPT-3, BigSleep is capable of taking any text prompt and visualizing an image to fit the words. That could be something esoteric like eternity, or it could be a bowl of cherries or a beautiful house (the latter of which can be seen below). Think of it like a Google Images search — only for pictures that have never previously existed.

How BigSleep works

“At a high level, BigSleep works by combining two neural networks: BigGAN and CLIP,” Ryan Murdock, BigSleep’s 23-year-old creator, a student studying cognitive neuroscience at the University of Utah, told Digital Trends.

The first of these, BigGAN, is a system created by Google that takes in random noise and outputs images. BigGAN is a generative adversarial network: a pair of dueling neural networks that carry out what Murdock calls an “adversarial tug-of-war” between an image-generating network and a discriminator network. Over time, that back-and-forth improves both networks.
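For a concrete picture of that tug-of-war, here is a minimal PyTorch sketch of a single GAN training step. It is purely illustrative: the tiny generator and discriminator below are stand-ins with arbitrary layer sizes, not BigGAN itself.

```python
# Minimal GAN training step (illustrative only; BigGAN is far larger and
# class-conditional, but the adversarial loop is the same basic idea).
import torch
import torch.nn as nn

latent_dim = 128

generator = nn.Sequential(          # maps random noise -> a fake "image" vector
    nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(      # scores how "real" an image looks
    nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):                     # real_images: (batch, 784)
    batch = real_images.size(0)

    # 1) The discriminator learns to tell real images from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) The generator learns to fool the (just-updated) discriminator.
    fake_images = generator(torch.randn(batch, latent_dim))
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```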

A ‘beautiful house,’ according to BigSleep. I mean, it’s not wrong. BigSleep

CLIP, meanwhile, is a neural net made by OpenAI that has been taught to match images and descriptions. Give CLIP text and images, and it will attempt to figure out how well they match and give them a score accordingly.
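That scoring step is something you can try directly: OpenAI has open-sourced CLIP, and the short Python sketch below (closely following the usage shown in the project’s documentation) asks it to rate one image, a hypothetical house.jpg, against a few candidate captions.

```python
# Score one image against several candidate captions with OpenAI's CLIP.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# "house.jpg" is a placeholder path; swap in any local image.
image = preprocess(Image.open("house.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a beautiful house", "a bowl of cherries",
                      "an intricate drawing of eternity"]).to(device)

with torch.no_grad():
    # The model returns similarity logits for every image/text pairing.
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Match probabilities:", probs)  # highest value = best-matching caption
```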

By combining the two, Murdock explained, BigSleep searches through BigGAN’s outputs for images that maximize CLIP’s score. It slowly tweaks the noise input to BigGAN’s generator until CLIP says that the resulting images match the description. Generating an image for a prompt takes about three minutes in total.
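In code, that search amounts to gradient descent on BigGAN’s inputs. The Python sketch below is a heavily simplified illustration of the idea, assuming the pytorch-pretrained-biggan and CLIP packages and made-up hyperparameters (learning rate, step count); Murdock’s actual notebook is considerably more involved.

```python
# Simplified BigSleep-style loop: nudge BigGAN's latent inputs so that CLIP
# rates the generated image as a good match for a text prompt.
# Package choices and hyperparameters are assumptions for illustration only.
import torch
import torch.nn.functional as F
import clip
from pytorch_pretrained_biggan import BigGAN

device = "cuda" if torch.cuda.is_available() else "cpu"
biggan = BigGAN.from_pretrained("biggan-deep-512").to(device).eval()
clip_model, _ = clip.load("ViT-B/32", device=device)

prompt = "an intricate drawing of eternity"
text_features = clip_model.encode_text(clip.tokenize([prompt]).to(device)).detach()

# Learnable BigGAN inputs: a noise vector and a (softmaxed) class vector.
noise = torch.randn(1, 128, device=device, requires_grad=True)
class_logits = torch.zeros(1, 1000, device=device, requires_grad=True)
optimizer = torch.optim.Adam([noise, class_logits], lr=0.05)

for step in range(300):   # a few hundred steps; the article cites ~3 minutes per image
    image = biggan(noise, class_logits.softmax(dim=-1), 1.0)   # (1, 3, 512, 512)
    image = (image + 1) / 2                                     # BigGAN outputs lie in [-1, 1]
    # CLIP expects 224x224 inputs; a fuller implementation would also apply
    # CLIP's pixel normalization and image augmentations.
    clip_input = F.interpolate(image, size=224, mode="bilinear", align_corners=False)
    image_features = clip_model.encode_image(clip_input)
    # Maximize cosine similarity between the generated image and the prompt.
    loss = -F.cosine_similarity(image_features, text_features).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The trick that makes this work is that both BigGAN and CLIP are differentiable, so the “search” can be done with ordinary backpropagation rather than by randomly sampling images and hoping for a good match.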

“BigSleep is significant because it can generate a wide variety of concepts and objects fairly well at 512 x 512 pixel resolution,” Murdock said. “Previous work has produced impressive results, but, by my knowledge, much of it has been restricted to lower-resolution images and more everyday objects.”

Image-generating A.I.

BigSleep isn’t the first time A.I. has been used to generate images. Its name is reminiscent of DeepDream, an A.I. created by Google engineer Alexander Mordvintsev that creates psychedelic imagery using image-classification models. A GAN-based system was also used to create the A.I. painting that sold at auction in 2018 for a massive $432,500. Even so, BigSleep is a fascinating step forward.

To try out BigSleep for yourself, Murdock suggested checking out his Google Colab notebook for the project. There’s a bit of a learning curve involved in using the Colab interface and a few other steps, but it’s free to take for a spin, and other ways of testing it will likely open up in the weeks to come. If you’re interested, you can also visit r/MediaSynthesis, where users are posting some of the best images they’ve generated with the system so far.
