Photorealistic A.I. tool can fill in gaps in images, including faces

You only need to check out the latest Hollywood blockbuster or pick up a new AAA game to be reminded that computer graphics can create some dazzling, otherworldly images. But some of the most impressive examples of machine-generated imagery aren’t alien landscapes or giant monsters; they’re image modifications that we don’t even notice.

That’s the case with a new A.I. demonstration created by computer scientists in China. In a collaboration between Sun Yat-sen University in Guangzhou and Microsoft Research in Beijing, they’ve developed a smart artificial intelligence that can accurately fill in blank areas in an image, whether that’s a missing face or the front of a building.

Called inpainting, the technique uses deep learning to fill these spaces, either by copying image patches from the remainder of the picture or by generating new areas that look convincingly accurate. The tool, which its creators call PEN-Net (Pyramid-context ENcoder Network), performs this image restoration by “encoding contextual semantics from full-resolution input and decoding the learned semantic features back into images.” A component called the Attention Transfer Network (ATN) transfers relevant features from the visible parts of the image into the missing regions. The resulting images are not only impressively realistic, but the tool is also very quick to learn.
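The core idea behind attention-based filling can be illustrated with a toy sketch. This is not the authors’ code and glosses over PEN-Net’s multi-scale pyramid; it simply shows how missing positions can be reconstructed as a softmax-weighted combination of known context features, weighted by cosine similarity:

```python
import numpy as np

def attention_fill(features, mask):
    """Toy sketch of attention-based hole filling (illustrative only).

    features: (N, C) array of per-position feature vectors.
    mask: (N,) boolean array, True where the region is missing.
    Each missing position is reconstructed as a softmax-weighted
    sum of the known positions, weighted by cosine similarity.
    """
    known = features[~mask]  # (K, C) valid context features
    # Normalize context features for cosine similarity.
    known_n = known / (np.linalg.norm(known, axis=1, keepdims=True) + 1e-8)
    filled = features.copy()
    for i in np.where(mask)[0]:
        q = features[i]
        q_n = q / (np.linalg.norm(q) + 1e-8)
        scores = known_n @ q_n               # similarity to each known patch
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()             # softmax attention weights
        filled[i] = weights @ known          # weighted copy of context
    return filled
```

In PEN-Net itself the attention scores are computed on deeper, more semantic feature maps and then reused to transfer higher-resolution features, which is what lets it keep both structure and texture coherent.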

“[In this work, we proposed] a deep generative model for high-quality image inpainting tasks,” Yanhong Zeng, a lead author on the project, who is associated with both Sun Yat-sen University’s School of Data and Computer Science and Key Laboratory of Machine Intelligence and Advanced Computing, told Digital Trends. “Our model fills missing regions from deep to shallow at all levels, based on a cross-layer attention mechanism, so that both structure and texture coherence can be ensured in inpainting results. We are excited to see that our model is capable of generating clearer textures and more reasonable structures than previous works.”

As Zeng notes, this isn’t the first time researchers have developed tools for inpainting. However, the team’s PEN-Net system demonstrates impressive results compared with the classical PatchMatch method and even other state-of-the-art approaches.

“Image inpainting has a wide range of applications in our daily life,” Zeng continued. “We are now planning to apply our technology in image editing — especially for object removal [and] old photo restoration.”

A paper describing the work, titled “Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting,” is available to read on the preprint repository arXiv.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…