Gory but good: Company plans virtual cadaver software

Dead bodies are in short supply, a fact that might surprise you unless you’ve been through medical school or dissected a corpse. The medical cadaver market has obvious challenges: Not everyone wants to donate their body to be sliced, carved, drilled, tagged, and documented before finally being left to rest in peace, after all.

Stiffs are also expensive to care for—they have to be refrigerated, which requires special equipment and trained staff. Heap on a little bad publicity about the organ-donor trade and you’ve got an environment Dr. Frankenstein would find challenging.

So to solve the shortage of real dead folks, anatomists decided to create virtual ones.

An early pioneer of such virtual simulation was Norman Eizenberg, now an associate professor in the Department of Anatomy and Developmental Biology at Monash University in Australia. Over 20 years ago, he began collecting dissection data and storing it in electronic form. That resulted in the creation of the An@tomedia virtual dissection software, which tremendously sped up the traditional process of learning human anatomy.

According to Eizenberg, the ratio of students to cadavers at his school is 80 to one, and slicing up a stiff is a slow, meticulous process.

“You can’t just take a knife and fork and start cutting,” he told Digital Trends. “You need to make the dissections on the cadaver and clear away fat, clear away fibers—all the tissues that hold us together.” In a regular medical school setting, it would take a student several days to accomplish what now takes a few clicks on a computer—each screen in Anatomedia represents a week’s worth of dissection. “On the screen it would take seconds to go to the next level,” Eizenberg says.

And that’s just the start; others want to push the idea even further. Robert Rice is a former NASA consultant who built virtual astronauts for the agency and holds a Ph.D. in anatomy, while Peter Moon is the CEO of Baltech, a sensing and simulation technology company in Australia. They want to create a 3-D virtual human whose anatomy students will be able to not only see but actually feel. And they’re not talking about a plastic imitation body made from synthetic tissues. They’re talking about a haptic, computerized human model that aspiring medics will be able to slice away on a computer screen while experiencing the sensation of cutting through skin, pushing away fat, and uncovering blood vessels. Moon calls it “putting technologies and innovation together to create a new norm.”

How does one build a tactile experience for dissecting a human body on a flat piece of glass?

The idea may sound far out, but each and every one of us already uses haptic electronic devices—the touchscreens on our smartphones and tablets that vibrate when we type a phone number or text a friend. That glass can respond to your taps only in a simple way—it can’t convey the flexibility or density of what you’re touching. But more sophisticated haptic devices can, and they already exist. Such devices create the sense of touch by applying forces, vibrations, or motions to their user. This mechanical stimulation helps create haptic virtual objects (HVOs) in a computer simulation.
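
To make the principle concrete, here is a minimal sketch, in Python and not tied to any particular haptics SDK, of how an HVO can be rendered: when the stylus tip pushes into a virtual object, the device pushes back with a spring-like force proportional to how deep the tip has gone. The stiffness value and function names are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

# Penalty (spring) model for rendering a haptic virtual object: if the stylus
# tip penetrates the object's surface, push it back along the surface normal
# with a force proportional to the penetration depth. A real haptic device
# would run a loop like this at roughly 1 kHz.

STIFFNESS = 800.0  # N/m -- hypothetical value; higher feels "harder"

def sphere_contact_force(tip_pos, center, radius, stiffness=STIFFNESS):
    """Force felt at the stylus tip when it presses into a virtual sphere."""
    offset = tip_pos - center
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)                   # tip is outside the sphere
    normal = offset / dist                   # outward surface normal
    return stiffness * penetration * normal  # spring force pushing the tip out

# Simulated stylus positions moving into a 5 cm sphere centered at the origin
for x in (0.06, 0.05, 0.04, 0.03):
    force = sphere_contact_force(np.array([x, 0.0, 0.0]),
                                 center=np.zeros(3), radius=0.05)
    print(f"tip at x={x:.2f} m -> force {force[0]:5.1f} N")
```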

With a tactile device as an intermediary, users can manipulate HVOs on the screen while experiencing them as if they were real. The concept is similar to a flight simulator, where simple controls such as a joystick let a student pilot fly a virtual plane. The haptic human will be far more sophisticated, allowing student doctors to perform virtual dissections and surgeries.

“We’ll offer multi-touch, both-hands haptics which invokes the remarkable human sense of touch, sensitivity and meaning,” Rice says.

He has already laid out a roadmap to the haptic human. Anatomedia has a database of photos and scans depicting various body parts, bones, muscles, and tissues. Using a haptic programming language such as H3DAPI—an open-source software development platform—programmers can assign tactile qualities to the Anatomedia objects and make them respond to the movements of a student’s virtual scalpel just as they would in real life. Such tactile qualities include stiffness, deformability, and various textures.

“You will feel the texture of skin, the firmness of an athletic muscle or the flabbiness of belly fat, the rigidity of your bony elbow or the pulsatile flow of blood at your wrist pulse point,” says Rice. All of the physical properties that exist in the world are built into the haptic programming language.
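
As a rough illustration of what assigning those qualities might look like, consider the sketch below. H3DAPI describes haptic surfaces through its own X3D, C++, and Python interfaces, so the class, field names, and per-tissue numbers here are hypothetical stand-ins, meant only to show how tactile qualities such as stiffness, deformability, and surface texture could be attached to objects pulled from an anatomy database.

```python
from dataclasses import dataclass

# Hypothetical material description: each anatomy layer gets a set of tactile
# parameters that a haptic rendering loop could use when the stylus touches it.

@dataclass
class TactileMaterial:
    stiffness: float   # N/m -- resistance when pressed
    damping: float     # N*s/m -- how lively or dead the rebound feels
    friction: float    # 0..1 -- surface texture under a sliding stylus

# Made-up per-tissue values, tuned by feel rather than measured
TISSUE_LIBRARY = {
    "skin":   TactileMaterial(stiffness=300.0,  damping=2.0, friction=0.6),
    "fat":    TactileMaterial(stiffness=80.0,   damping=5.0, friction=0.2),
    "muscle": TactileMaterial(stiffness=600.0,  damping=3.0, friction=0.4),
    "bone":   TactileMaterial(stiffness=3000.0, damping=0.5, friction=0.3),
}

def assign_materials(anatomy_layers):
    """Pair each named layer from the anatomy database with its tactile feel."""
    return {name: TISSUE_LIBRARY[name] for name in anatomy_layers}

if __name__ == "__main__":
    forearm = assign_materials(["skin", "fat", "muscle", "bone"])
    for name, material in forearm.items():
        print(f"{name:>6}: stiffness={material.stiffness:6.0f} N/m, "
              f"friction={material.friction:.1f}")
```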

The computerized version of a patient can also take basic anatomy lessons to the next level and portray how organs look and feel when they’re damaged—the software would present fractured bones, swollen muscles, or growing tumors in a visual and tactile way. That means touching a bicep in an operating room would feel different than touching it in an autopsy room, because the haptic programming language allows for that. “Our virtual anatomy becomes a unique virtual patient available for the ‘laying on of hands’ to detect and diagnose,” Rice says.
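
In code terms, that amounts to swapping the parameter set the haptic loop renders depending on the tissue’s state. The tiny sketch below is purely illustrative, with made-up state names and values.

```python
# The same anatomical object can expose a different "feel" depending on its
# state (healthy, swollen, post-mortem). All state names and numbers below
# are made up, chosen only to show that tactile parameters can be swapped
# per condition before the haptic loop renders them.

BICEPS_FEEL = {
    # state          (stiffness N/m, damping N*s/m)
    "athletic":      (700.0, 2.0),   # firm, lively rebound
    "swollen":       (900.0, 6.0),   # tense and heavily damped
    "post_mortem":   (250.0, 8.0),   # slack, no muscle tone
}

def haptic_params(tissue_state):
    """Return the (stiffness, damping) pair the haptic loop should render."""
    return BICEPS_FEEL[tissue_state]

print(haptic_params("athletic"))     # operating-room feel
print(haptic_params("post_mortem"))  # autopsy-room feel
```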

Rice and Moon titled their virtual human software Interactive Human Anatomy Visualization Instructional Technology—or simply IHAVIT.

So when can we expect computer cadavers to replace the real ones, making anatomy labs obsolete? Once the project is funded, says Rice, it would take about three to four years to program such a virtual human. But the task requires a significant investment—more than a typical Indiegogo campaign can amass. “If we do the human arm as a proof of concept,” Rice says, “we’re looking for a budget of three quarters of a million dollars, and we would deliver it in 12 months.”

To build the rest would take 36 to 48 months, Rice estimates, and would cost a ballpark figure of $15 million—with a state-of-the-art version adding up to $24 million. If that sounds like a lot, dig this: Running a mid-size cadaver lab costs a medical school about $3 million to $4 million a year. If a handful of medical schools pitch in, within a few years they’ll be saving those millions. And they would no longer have to deal with dead-people problems such as shipping, preserving, returning, and cremating. “It would probably reduce the overall cost of medical education,” Rice says.

But the team has yet to find an investor to back the project—it would take someone like Elon Musk, Bill Gates, Mark Cuban, or Mark Zuckerberg, Rice says. “The individual needs to be a champion of our opportunity to integrate advanced technology with traditional healthcare,” he says. “We need to touch the mind and heart of an investor inspired to support this opportunity.”

Lina Zeldovich
Former Digital Trends Contributor
Lina Zeldovich lives in New York and writes about science, health, food and ecology. She has contributed to Newsweek…