
Google Inceptionism may be cooler than the real thing

[Image: Google inceptionism neural-network visualization. Credit: Michael Tyka]
It may not be quite the same thing as planting an idea in a dreaming mind, but it stands to reason that this form of inception is even cooler. In a fascinating leap forward in the realm of artificial intelligence, Google’s research lab has effectively “trained” artificial neural networks by showing them millions of images, with each successive layer of artificial neurons recognizing an additional aspect of an image until the final output is reached. Taken all at once, the process allows an artificially intelligent system to recognize a picture, but Google wanted to know what was happening at each individual stage. And that’s where things got cool.

When Google researchers decided to partition out the recognition process, allowing just one aspect of the entire analysis to enhance a certain image, they created some particularly groovy pictures. Calling it inceptionism, Google’s Alexander Mordvintsev explained, “Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.”
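To make the layer-by-layer idea concrete, here is a minimal sketch of inspecting what individual layers respond to, assuming PyTorch and a pretrained torchvision GoogLeNet (the Inception-style architecture the technique’s name nods to). The specific layers hooked below are illustrative choices, not Google’s exact setup.

```python
import torch
from torchvision import models

# Pretrained Inception-style network (GoogLeNet) from torchvision.
model = models.googlenet(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # the network itself is never updated

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output  # stash this layer's output for inspection
    return hook

# A shallow layer tends to respond to edges and simple textures,
# while a deeper layer responds to more object-like parts.
model.conv1.register_forward_hook(save_activation("shallow"))
model.inception4c.register_forward_hook(save_activation("deep"))

image = torch.rand(1, 3, 224, 224)  # stand-in for a real photo
model(image)
for name, act in activations.items():
    print(name, tuple(act.shape))
```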

Essentially, this pinpointing of one particular recognition layer magnified whatever an image somewhat resembled. Wrote Mordvintsev, “We ask the network: ‘Whatever you see there, I want more of it!’ This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.”
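A rough sketch of that feedback loop, continuing from the snippet above (so the hooked model and `activations` dictionary are assumed): plain gradient ascent that nudges the input image to strengthen whatever the chosen layer already responds to. This is a generic reconstruction of the idea, not Google’s published code.

```python
# Start from an image (random noise here stands in for a photo) and let
# gradients flow into the image rather than into the network's weights.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(image)                      # forward pass fills `activations`
    layer_out = activations["deep"]   # the layer we asked to "enhance"
    loss = -layer_out.norm()          # maximizing activation = minimizing its negative
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)        # keep pixel values in a displayable range

# Whatever the layer faintly detected at the start is now much stronger:
# the cloud that looked a little like a bird gets pushed toward an actual bird.
```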

Beyond creating incredibly trippy images, Google believes the possibilities unlocked by this new, deconstructed process are limitless. Concluded the research team, “The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training. It also makes us wonder whether neural networks could become a tool for artists — a new way to remix visual concepts — or perhaps even shed a little light on the roots of the creative process in general.”

Editors' Recommendations

Lulu Chang
Former Digital Trends Contributor
Google may have accidentally shown off the Pixel 6’s in-display fingerprint sensor
[Image: Google Pixel 6 colors]

Google may have already shared a lot about the Pixel 6 and Pixel 6 Pro, but there are still quite a few unanswered questions the company will address at its proper launch later in the year. However, an accidental post from Google's Android chief Hiroshi Lockheimer may just have revealed the position of the phone's in-display fingerprint sensor -- and it marks a change from rear-mounted fingerprint sensors (and the short-lived face unlock of the Pixel 4).

In an image shared on Twitter, the senior vice president posted a screenshot of the lock screen of an Android 12 phone with an in-display fingerprint sensor in a bid to show off the Material You interface. Eagle-eyed users quickly noticed that the elements on display matched what the Pixel 6 would be expected to show. The folks over at 9to5Google note that this could be a coincidence. Phones like the Xiaomi Mi 11 Ultra and OnePlus 9 Pro can be used on Android 12 at the moment, and they have in-display fingerprint sensors, albeit with differing positioning. However, the fact that the image was deleted rather quickly does make it more likely to be a Pixel 6. Google did also accidentally reveal the Pixel 5a's camera in a similar manner earlier in the year.

Read more
Can A.I. beat human engineers at designing microchips? Google thinks so

Could artificial intelligence be better at designing chips than human experts? A group of researchers from Google's Brain Team attempted to answer this question and came back with interesting findings. It turns out that a well-trained A.I. is capable of designing computer microchips -- and with great results. So great, in fact, that Google's next generation of A.I. computer systems will include microchips created with the help of this experiment.

Azalia Mirhoseini, a computer scientist on Google Research's Brain Team, explained the approach in an issue of Nature together with several colleagues. Artificial intelligence usually has an easy time beating a human mind at games such as chess. Some might say that A.I. can't think like a human, but in the case of microchips, that proved to be the key to finding some out-of-the-box solutions.

Read more
Google’s LaMDA is a smart language A.I. for better understanding conversation
[Image: LaMDA model]

Artificial intelligence has made extraordinary advances when it comes to understanding words and even being able to translate them into other languages. Google has helped pave the way here with amazing tools like Google Translate and, recently, with its development of Transformer machine learning models. But language is tricky -- and there’s still plenty more work to be done to build A.I. that truly understands us.
Language Model for Dialogue Applications
At Tuesday’s Google I/O, the search giant announced a significant advance in this area with a new language model it calls LaMDA. Short for Language Model for Dialogue Applications, it’s a sophisticated A.I. language tool that Google claims is superior when it comes to understanding context in conversation. As Google CEO Sundar Pichai noted, this might mean intelligently parsing an exchange like “What’s the weather today?” “It’s starting to feel like summer. I might eat lunch outside.” That makes perfect sense as a human dialogue, but it would befuddle many A.I. systems looking for more literal answers.

LaMDA has a superior grasp of learned concepts, which it is able to synthesize from its training data. Pichai noted that responses never follow the same path twice, so conversations feel less scripted and more naturally responsive.

Read more