A.I. can remove distortions from underwater photos, streamlining ocean research

Light behaves differently in water than it does in air, and that behavior creates the blur or green tint common in underwater photographs, as well as the haze that blocks out vital details. But thanks to research from an oceanographer and engineer and a new artificial intelligence program called Sea-thru, that haze could soon disappear and those lost colors could be restored.

Besides putting a downer on the photos from that snorkeling trip, the inability to capture accurately colored photos underwater hinders scientific research at a time when concern for coral and ocean health is growing. That’s why oceanographer and engineer Derya Akkaynak, along with Tali Treibitz of the University of Haifa, devoted their research to developing an artificial intelligence program that can render scientifically accurate colors while removing the haze from underwater photos.

As Akkaynak points out in her research, imaging A.I. has exploded in recent years. Algorithms have been developed that can tackle everything from turning an apple into an orange to reversing manipulated photos. Yet, she says, underwater imaging algorithms still lag behind, because water obscures many of the elements in a scene that such A.I. relies on.

When light travels through water, it is both scattered and absorbed. The scattering creates what’s called backscatter, a haze that prevents the camera from seeing the scene in full detail. The absorption, which strips away red wavelengths first, prevents colors from reproducing accurately underwater.
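To make those two effects concrete, here is a rough Python sketch of the kind of image-formation model this line of research builds on: the direct signal from an object fades exponentially with distance, while a veiling backscatter term builds up in its place. The function and parameter names are illustrative only, not the paper's own notation or code.

import numpy as np

def simulate_underwater(J, z, beta_D, beta_B, B_inf):
    """Toy forward model of how water degrades a scene.

    J      -- true scene colors at the objects (H x W x 3, floats in [0, 1])
    z      -- per-pixel distance from camera to object, in meters (H x W)
    beta_D -- per-channel attenuation coefficients for the direct signal
    beta_B -- per-channel coefficients governing how fast backscatter builds
    B_inf  -- veiling light: the color the water column converges to
    """
    z = z[..., None]                                    # broadcast range over the 3 color channels
    direct = J * np.exp(-beta_D * z)                    # absorption dims the signal, reds fading fastest
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))   # haze grows toward the veiling color with distance
    return direct + backscatter                         # what the camera actually records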


To tackle the problem, Akkaynak trained the software on sets of underwater images the team shot themselves using readily available gear: a consumer camera, underwater housing, and a color card. First, she would find a subject, ideally coral with a lot of depth and dimension, since the farther away objects are underwater, the more they are obscured. She would then place the color card near the coral and photograph it from multiple distances and angles.

Using those images as a data set, the researchers trained the program to mathematically estimate and remove the backscatter and correct the color, working at the pixel level. The resulting program, called Sea-thru, corrects the haze and color automatically. The software still requires multiple images of the same subject, because the process relies on a known range map, an estimate of how far each object is from the camera, to model and remove the backscatter. The researchers say, however, that the color card is no longer a necessity.
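As a loose illustration of that pixel-level idea, the sketch below inverts the toy model above: subtract the estimated backscatter at each pixel, then amplify what remains according to how far away the object is. This is a simplified stand-in under assumed coefficients, not the authors' actual Sea-thru implementation, which has to estimate those quantities from the images and range map themselves.

import numpy as np

def correct_underwater(I, z, beta_D, beta_B, B_inf):
    """Invert the toy model: remove backscatter, then undo attenuation.

    I is the captured image (H x W x 3) and z the per-pixel range map in
    meters; beta_D, beta_B, and B_inf are assumed known here for simplicity.
    """
    z = z[..., None]                                    # broadcast range over the 3 color channels
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))   # haze predicted at each pixel's distance
    direct = np.clip(I - backscatter, 0.0, None)        # strip the haze, clamping at zero
    J_hat = direct * np.exp(beta_D * z)                 # re-amplify the absorbed signal
    return np.clip(J_hat, 0.0, 1.0)                     # recovered, physically motivated colors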

The resulting photos aren’t the same as what you could produce with the dehaze slider and color-correction tools in software like Lightroom. “This method is not Photoshopping an image,” Akkaynak told Scientific American. “It’s not enhancing or pumping up the colors in an image. It’s a physically accurate correction, rather than a visually pleasing modification.”

The team’s goal is to open up large volumes of image data for research; without the program, they explain, much of the work that depends on accurate color and detail must be done manually, because too much is obscured in the photographs. “Sea-thru is a significant step towards opening up large underwater datasets to powerful computer vision and machine learning algorithms, and will help boost underwater research at a time when our oceans are [under] increasing stress from pollution, overfishing, and climate change,” the research paper concludes.
