
How a clever photography trick is bringing Seattle’s shipwrecks to the surface


Seattle is an isthmus.

On the east side of the city lies freshwater Lake Washington, while on the west you’ll find the salty waters of the Puget Sound. Created when a glacier inched its way across the land thousands of years ago, Lake Washington is home to algae, zooplankton, and some PCB-contaminated fish. Thanks to its ocean access, the Sound is occasionally visited by orcas.

At the bottom of these two bodies of water, however, the landscape starts to change. Divers have found swords, tequila bottles, bags of garbage, and old laptops. There are more historically significant objects, too, like planes and shipwrecks.

Even for those with the gear and training to dive more than 100 feet in frigid water, getting a sense of what these wrecks really look like can be a challenge. “The visibility is quite poor, so we’re not able to see very far,” Kees Beemster Leverenz told Digital Trends. “And on top of that, almost no light penetrates down past the first couple of dozen feet, maybe 70 feet or so.” Beemster Leverenz is a Microsoft software developer by day and a diver by night and on many weekends. He’s part of Global Underwater Explorers (GUE), a nonprofit that educates divers and helps conserve aquatic environments. Using photogrammetry, he hopes to bring some of these sunken vessels to the surface in the form of 3D models.

Mars attacked

In 2011, a team that included some GUE divers located the Mars in the Baltic Sea. Sunk during a battle in 1564, the Swedish warship could hold as many as 900 sailors. It’s massive and, thanks to the dark, cold Nordic waters, pretty well preserved. There’s no way to recover the 200-foot, three-masted ship, but researchers were excited to learn more about the famed wreck. Instead of sending a bunch of scientists 250 feet below, they devised a way to bring the ship to life with photogrammetry.


By taking laser scans and thousands of photos of the planks, cannons, masts, and so on, Professor Johan Rönnby of Södertörn University and his team were able to capture the ship from every angle. Then, software pieces the photos together into a 3D model that researchers can spin and zoom in on, letting them examine fine details while also getting a sense of how the ship looked when it was whole.

When Beemster Leverenz heard about the Mars project, he decided to use some of the techniques on Seattle-area wrecks. There were plenty to choose from. In Lake Washington alone, there are at least seven plane wrecks, a dozen coal cars that slid off a barge, and hundreds of boats. Over the decades, divers have discovered many of them, guided by National Oceanic and Atmospheric Administration sonar data.

Deep, dark sea

Like the Baltic, Lake Washington is dark and chilly. It’s also full of sediment. Stir up the muck on the bottom, and you might as well surface for the day. Your photos are just going to show cloudy water, cast greenish yellow by the light.

Conditions in Lake Crescent, about 100 miles northwest of Seattle, are very different from Lake Washington. Thanks to the clear water and ambient light, Kathryn Arant, another GUE diver, was able to quickly snap the 200 or so images needed for the photogrammetry of a 1927 Chevrolet lying on its side in 170 feet of water.

[iframe-embed url="https://sketchfab.com/models/805e79f2ab444e0a8574e3d384e217e0/embed?autostart=1&autospin=0.1" size="xlarge" height="500px"]
A 3D model of the Warren car in Washington’s Lake Crescent gives viewers a look at a recently solved mystery. Kees Beemster Leverenz

The car was first found in 2002, solving the mystery of what happened to a young couple, Russell and Blanch Warren, who went missing in 1929. Because of the winding, unpaved roads around Lake Crescent, it had long been assumed their car went into the water. Using Arant’s images and Agisoft Photoscan software, the team produced a model that shows the Warren car down to its speedometer and still-inflated tires.

The car was one of GUE Seattle’s first attempts at photogrammetry. It took Beemster Leverenz and his fellow divers a few tries to get the hang of the process. They started out using GoPros protected by underwater housings. They quickly realized they needed better cameras and more light, so they purchased 33,000-lumen light bars that will dazzle you if you look into them as they switch on. Despite that intense brightness, it takes four of them to even make a dent in the dimness more than 100 feet from the surface. “We’re able to turn what appears to be really bad visibility into so-so visibility,” said Beemster Leverenz.

Connect the dots

“I like to say that the easiest thing that you could ever document is a dome that doesn’t have any little bits that stick out, that has no wings or propellers to make things difficult,” said Beemster Leverenz. The Warren car was pretty close. Planes are harder. Divers need to balance getting all the details with not overwhelming the software. “It’s important to be frugal where you can with photos,” he said.

For one plane wreck, a PBM Mariner, the GUE team took about 5,500 photos. There’s only one of these planes left intact — above sea level, anyway — at the Pima Air and Space Museum in Arizona. The flying boat was difficult to transport on land, so most were scrapped. One sank in Lake Washington in 1949. Navy divers tried to raise the plane in the 1990s but only succeeded in breaking off the tail. Most of it still sits about 70 feet underwater.

It’s also virtually in the Pima museum, thanks to GUE’s photogrammetry efforts. Working with Dr. Megan Lickliter-Mundon, an underwater aviation archaeologist, the team created a 3D model of the rare plane, which sits alongside the recovered tail.


Recreating wrecks like the PBM Mariner and another sunken plane, the PB4Y-2, requires a lot of photos, which in turn take a lot of processing power. First, the software analyzes the photos and starts lining them up. It recognizes certain features (a rudder, say, or a wing flap) and starts mapping them out, using photos of the same object taken from different angles. The result is called a point cloud, which Beemster Leverenz compares to a connect-the-dots puzzle. The shape is there; it’s just not filled in.
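Photoscan doesn’t expose its alignment internals, but this first stage rests on a well-known idea: detect distinctive features in overlapping photos, then match them between shots so the same physical point can be triangulated from several angles. Here is a minimal sketch of that matching step in Python with OpenCV; the ORB detector, file names, and feature count are illustrative assumptions, not details of GUE’s actual pipeline.

```python
# A rough sketch of photogrammetry's alignment stage: find the same
# physical points in two overlapping photos. (Illustrative only; the
# detector choice and file names are assumptions, not GUE's pipeline.)
import cv2

# Two overlapping photos of the same part of the wreck (hypothetical paths).
img1 = cv2.imread("wreck_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("wreck_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive keypoints (the edge of a wing flap, a rivet line)
# and compute a descriptor that fingerprints each one.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Each good match says: these two pixels show the same physical point,
# seen from two different angles.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences between the two photos")

# Triangulating thousands of such correspondences across all the photos
# is what yields the sparse, connect-the-dots point cloud.
```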

Next, the computer connects those dots into a mesh. “The mesh doesn’t actually have the color to it,” he said. “It’s really similar [to] putting together a plastic model before you painted it.” The white mesh looks like a plane, but it doesn’t have the details and definition needed to distinguish certain parts. The third step is to layer the details from the photos on top of the mesh, a sort of coloring-in process.
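To make that second step concrete, here is a sketch of one common way to connect a point cloud into an uncolored mesh: Poisson surface reconstruction, via the open-source Open3D library. Photoscan uses its own reconstruction method; the file names and depth setting below are assumptions for illustration.

```python
# A sketch of the point-cloud-to-mesh step using Open3D's Poisson
# reconstruction. (Photoscan's own method is proprietary; file names
# and parameters here are illustrative assumptions.)
import open3d as o3d

# The point cloud produced by the alignment stage (hypothetical file).
pcd = o3d.io.read_point_cloud("mariner_points.ply")

# Surface reconstruction needs normals: an estimate, at each dot, of
# which way the surface faces.
pcd.estimate_normals()

# Poisson reconstruction connects the dots into a triangle mesh, the
# white plastic-model stage before any painting.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("mariner_mesh.ply", mesh)
```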

For GUE’s latest project, the PB4Y-2, Beemster Leverenz was able to recruit a non-diver to help. Patrick Goodwin works for Dice, which makes the Battlefield video game series. He and Beemster Leverenz have a mutual friend and happened to start discussing photogrammetry via voice chat while playing a video game together. Dice uses photogrammetry to realistically bring real-world objects and places — like the Alps — into games. Goodwin optimizes the models to keep them wieldy: if a model is overly detailed, it becomes too overloaded with data to spin and let you see the wreck from every angle. The plane’s rivets, for example, don’t need to be built into the model when they can be projected on top instead. It’s like the difference between painting individual stripes and slapping on a decal.
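The kind of slimming Goodwin does can be approximated with off-the-shelf mesh decimation. Below is a sketch using Open3D’s quadric decimation; the triangle budget and file names are invented for illustration, and Dice’s actual tooling is certainly more sophisticated.

```python
# A sketch of model optimization: collapse the mesh to a triangle budget
# so it stays responsive to spin and zoom. (Budget and file names are
# invented; this stands in for, not reproduces, Goodwin's workflow.)
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mariner_mesh.ply")
print(f"before: {len(mesh.triangles)} triangles")

# Quadric decimation merges triangles where the surface is nearly flat,
# preserving the silhouette while cutting the data a viewer must handle.
light = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
print(f"after: {len(light.triangles)} triangles")

# Fine detail like rivets would then be baked into a normal map projected
# onto the simplified surface: the decal rather than the painted stripes.
o3d.io.write_triangle_mesh("mariner_light.ply", light)
```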

Render of the sunken World War II-era patrol bomber, the Consolidated PB4Y-2 Privateer.
Using photogrammetry, a technique for extracting 3D information from photographs, Beemster Leverenz and Dice developer Patrick Goodwin were able to generate a high-quality 3D model of a sunken Consolidated PB4Y-2 Privateer. Kees Beemster Leverenz and Patrick Goodwin

In addition, Goodwin is helping render some of the environment around the wreck. “If you want to make a model of a blank white room, you can’t do it,” Beemster Leverenz said. The software needs contrast to create the model. The plane itself has that, but the ground it rests on doesn’t. “It’s just sort of a flat greenish, yellowish nothingness,” he said. But it’s necessary to provide context. Without it, “you end up with a model of an airplane that doesn’t look like it’s actually crashed into anything,” he added. Sometimes the contrast comes from unexpected places — a crinkled Target bag or a red Solo cup. Those party cups are ubiquitous, showing up as specks of red in some of GUE’s photogrammetry models.

Though everyone survived both the PB4Y-2 and PBM Mariner sinkings, the fact that man-made objects litter these aquatic floors is depressing — even if they are being reclaimed by marine life. There are ways to use photogrammetry to help nature as well, Beemster Leverenz said. The Marine Science and Technology Center in Des Moines, Washington, has considered creating an artificial reef in Puget Sound to replace waterlogged VW Beetles and other makeshift habitats, and photogrammetry could be a nondestructive way to measure the reef’s growth over time. Hopefully, it will stay free of Solo cups.
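The article only floats the idea, but one plausible approach would be to scan the reef on a repeat schedule and compare the resulting point clouds. A minimal sketch, assuming two already-aligned scans in hypothetical files:

```python
# A hypothetical sketch of tracking reef growth with repeat scans:
# measure how far each point in the new scan sits from the old surface.
# (File names and alignment are assumed; this is not an existing survey.)
import numpy as np
import open3d as o3d

reef_then = o3d.io.read_point_cloud("reef_2019.ply")
reef_now = o3d.io.read_point_cloud("reef_2020.ply")

# Distance from every new point to its nearest old neighbor; growth
# (new coral, anemones, kelp holdfasts) shows up as large distances.
dists = np.asarray(reef_now.compute_point_cloud_distance(reef_then))
print(f"median change: {np.median(dists):.3f} m, max: {dists.max():.3f} m")
```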
