
Gory but good: Company plans virtual cadaver software

Dead bodies are in short supply, a fact that might surprise you unless you’ve been through medical school or dissected a corpse. The medical cadaver market has obvious challenges: Not everyone wants to donate their body to be sliced, carved, drilled, tagged, and documented before finally being left to rest in peace, after all.

Stiffs are also expensive to care for—they have to be refrigerated, which requires special equipment and trained staff. Heap on a little bad publicity about the organ-donor trade and you’ve got an environment Dr. Frankenstein would find a challenge.

So to solve the shortage of real dead folks, anatomists decided to create virtual ones.

An early pioneer of such virtual simulation was Norman Eizenberg, now an associate professor at the Department of Anatomy and Developmental Biology at Monash University in Australia. Over 20 years ago, he began collecting dissection data and storing it in electronic form. That resulted in the creation of An@tomedia’s virtual dissection software, which tremendously sped up the traditional process of learning human anatomy.

According to Eizenberg, the ratio of students to cadavers at his school is 80 to one, and slicing up a stiff is a slow, meticulous process.

“You can’t just take a knife and fork and start cutting. You need to clear away fat, clear away fibers—all the tissues that hold us together.”

“You can’t just take a knife and fork and start cutting,” he told Digital Trends. “You need to make the dissections on the cadaver and clear away fat, clear away fibers—all the tissues that hold us together.” In a regular medical school setting, it would take a student several days to accomplish what now takes a few clicks on a computer—each screen in Anatomedia represents a week’s worth of dissection. “On the screen it would take seconds to go to the next level,” Eizenberg says.

And that’s just the start; others want to push it even further. Robert Rice is a former NASA consultant who built virtual astronauts for the agency and holds a Ph.D. in anatomy, while Peter Moon is the CEO of Baltech, a sensing and simulation technology company in Australia. They want to create a 3-D virtual human whose anatomy students will be able not only to see but actually feel. And they’re not talking about a plastic imitation body made from synthetic tissues. They’re talking about a haptic, computerized human model that aspiring medics will be able to slice away on a computer screen while experiencing the sensation of cutting through the skin, pushing away fat and uncovering blood vessels. Moon calls it “putting technologies and innovation together to create a new norm.”

How does one build a tactile experience for dissecting a human body on a flat piece of glass?

The idea may sound far out, but each and every one of us is already using haptic electronic devices—the touchscreens on our smartphones and tablets that vibrate when we type a phone number or text a friend. That glass can respond to your taps only in a simple way—it can’t convey the flexibility or density of what you’re touching. But other, more advanced and sophisticated haptic devices can do that, and they already exist. Such devices can create the sense of touch by applying forces, vibrations or motions to their user. This mechanical stimulation helps create haptic virtual objects (HVOs) in a computer simulation.


With the tactile device as an intermediary, users can manipulate HVOs on the screen while experiencing them as if they’re real. The concept is similar to a flight simulator a student pilot may use, where simple controls such as a joystick let her fly a virtual plane. The haptic human will be far more sophisticated, allowing student doctors to perform virtual dissections and surgeries.

“We’ll offer multi-touch, both-hands haptics which invokes the remarkable human sense of touch, sensitivity and meaning,” Rice says.

He has already laid out a roadmap to the haptic human. Anatomedia has a database of photos and scans depicting various body parts, bones, muscles, and tissues. Using a haptic programming language such as H3DAPI—an open-source software development platform—programmers can assign tactile qualities to the Anatomedia objects and make them respond to the movements of a student’s virtual scalpel just as they would in real life. Such tactile qualities include stiffness, deformability, and texture.
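To make that concrete, here is a minimal, hypothetical Python sketch of what assigning tactile qualities to anatomical objects could look like. The tissue names, fields, and values are illustrative assumptions for this article, not actual H3DAPI or Anatomedia code.

```python
from dataclasses import dataclass

@dataclass
class HapticMaterial:
    stiffness: float        # resistance felt against the virtual scalpel (0.0-1.0)
    damping: float          # how quickly the surface absorbs tool motion
    static_friction: float  # drag felt before the tool starts to slide
    deformability: float    # how far the tissue yields under pressure

# Each dissection layer from the anatomy database gets its own "feel".
# All values below are made up for illustration.
TISSUE_LIBRARY = {
    "skin":   HapticMaterial(stiffness=0.35, damping=0.20, static_friction=0.60, deformability=0.40),
    "fat":    HapticMaterial(stiffness=0.10, damping=0.50, static_friction=0.30, deformability=0.90),
    "muscle": HapticMaterial(stiffness=0.55, damping=0.30, static_friction=0.40, deformability=0.50),
    "bone":   HapticMaterial(stiffness=0.95, damping=0.05, static_friction=0.20, deformability=0.05),
}

def feedback_force(tissue: str, tool_depth_mm: float) -> float:
    """Rough spring-style push-back the haptic device would render as the
    virtual scalpel presses into a given tissue layer (arbitrary units)."""
    m = TISSUE_LIBRARY[tissue]
    return m.stiffness * tool_depth_mm * (1.0 - 0.5 * m.deformability)

if __name__ == "__main__":
    for layer in ("skin", "fat", "muscle", "bone"):
        print(f"{layer:>6}: {feedback_force(layer, tool_depth_mm=2.0):.2f}")
```

In a real haptic pipeline, parameters like these feed the device’s force-feedback loop many hundreds of times per second, which is what makes fat feel soft and bone feel rigid under the stylus.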

“You will feel the texture of skin, the firmness of an athletic muscle or the flabbiness of belly fat, the rigidity of your bony elbow or the pulsatile flow of blood at your wrist pulse point,” says Rice. Physical properties like these are built into the haptic programming language.

Aspiring medics will be able to slice away on a computer screen while experiencing the sensation of cutting through the skin, pushing away fat and uncovering blood vessels.

The computerized version of a patient can also take a basic anatomy lesson to the next level and portray how organs look and feel when they’re damaged—the software would present fractured bones, swollen muscles, or tumors in a visual and tactile way. That means touching a bicep in an operating room would feel different than touching it in an autopsy room, because the haptic programming language allows for that. “Our virtual anatomy becomes a unique virtual patient available for the ‘laying on of hands’ to detect and diagnose,” Rice says.
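As a rough illustration of that idea, the sketch below (again hypothetical, with made-up names and numbers, not taken from IHAVIT or H3DAPI) shows how the same structure could simply be rendered with a different stiffness depending on its condition.

```python
# Hypothetical sketch: the same anatomical structure gets a different
# haptic "feel" depending on its condition, so a swollen muscle or a
# fractured bone resists the virtual tool differently.
BASELINE_STIFFNESS = {"muscle": 0.55, "bone": 0.95}

CONDITION_MULTIPLIER = {
    "healthy": 1.0,
    "swollen": 1.3,    # inflamed tissue feels firmer
    "tumor": 1.6,      # a growth feels harder and less yielding
    "fractured": 0.6,  # a broken bone gives way where it should not
}

def perceived_stiffness(tissue: str, condition: str = "healthy") -> float:
    """Stiffness the device would render for a structure in a given state."""
    return min(1.0, BASELINE_STIFFNESS[tissue] * CONDITION_MULTIPLIER[condition])

print(perceived_stiffness("muscle"))             # healthy bicep: 0.55
print(perceived_stiffness("muscle", "tumor"))    # same muscle with a growth: 0.88
print(perceived_stiffness("bone", "fractured"))  # broken bone gives way: 0.57
```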

Rice and Moon titled their virtual human software Interactive Human Anatomy Visualization Instructional Technology—or simply IHAVIT.

So when can we expect computer cadavers to replace the real ones, making anatomy labs obsolete? Once the project is funded, says Rice, it would take about three to four years to program such a virtual human. But the task requires a significant investment—more than a typical Indiegogo campaign can amass. “If we do the human arm as a proof of concept,” Rice says, “we’re looking for a budget of three quarters of a million dollars and we would deliver it in 12 months.”

To build the rest would take 36 to 48 months, Rice estimates, and would cost $15 million as a ballpark figure — with a state-of-the-art version adding up to $24 million. If that sounds like a lot, dig this: running a mid-size cadaver lab costs a medical school about $3 million to $4 million a year. If a handful of medical schools pitch in for the idea, within a few years they’ll be saving those millions. And they would no longer have to deal with dead-body problems such as shipping, preserving, returning, and cremating. “It would probably reduce the overall cost of medical education,” Rice says.

But the team has yet to find an investor to back the project—that would take someone like Elon Musk, Bill Gates, Mark Cuban, or Mark Zuckerberg, Rice says. “The individual needs to be a champion of our opportunity to integrate advanced technology with traditional healthcare,” he says. “We need to touch the mind and heart of an investor inspired to support this opportunity.”

Lina Zeldovich
Lina Zeldovich lives in New York and writes about science, health, food and ecology. She has contributed to Newsweek…