
This A.I. eavesdrops on emergency calls to warn of possible cardiac arrests

Ariel Skelley/Getty Images
When you phone 911, you’re patched through to a trained human dispatcher who can properly triage your call. Soon, however, you could also find yourself being listened to by a machine, one tuned in to very different verbal cues than the human emergency dispatcher.

Developed by Danish startup Corti, this emergency call-listening artificial intelligence is designed to listen to callers for signs that they may be about to go into cardiac arrest. When it detects such signs, it alerts the human dispatcher so that they can take the proper steps.

“Corti is meant to be a digital co-pilot for medical personnel,” Andreas Cleve, CEO of Corti, told Digital Trends. “Like a human doctor, Corti analyzes everything a patient says and shares in real time — from journal data, symptom descriptions, voice data, acoustic data, language data, their dialect, questions, and even their breathing patterns. Corti then outputs diagnostic advice to the medical personnel, to help them diagnose patients faster. This can be especially powerful in an emergency use case where mistakes can be fatal.”

As the company’s Chief Technology Officer Lars Maaloe told us, the technology framework uses deep learning neural networks trained on years of historical emergency calls. The work has not yet been peer-reviewed, but the team is working toward that, and a paper describing it is likely to be published later in 2018.
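Corti has not published its architecture, so the following is only a hypothetical sketch of the general idea: a classifier trained on features extracted from call audio (breathing irregularity, agonal sounds, caller distress) that flags likely cardiac arrests for the dispatcher. The feature names and toy data here are invented for illustration, and a simple logistic regression stands in for the deep networks the company describes.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit logistic-regression weights by gradient descent on log-loss."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return the estimated probability of cardiac arrest for one call."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy features per call: [irregular breathing, agonal sounds, caller panic],
# each scaled 0-1. Labels: 1 = cardiac arrest, 0 = routine call.
calls = [
    [0.9, 1.0, 0.8],
    [0.8, 0.9, 0.9],
    [0.1, 0.0, 0.3],
    [0.2, 0.0, 0.1],
]
labels = [1, 1, 0, 0]

w, b = train(calls, labels)
risk = predict(w, b, [0.85, 0.95, 0.7])  # features from a new incoming call
if risk > 0.5:
    print("ALERT: possible cardiac arrest")
```

In a production system the features would themselves be produced by neural networks running over raw audio and live transcripts, and the alert threshold would be tuned against the cost of false negatives, which in this setting is far higher than that of false positives.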

“Today the technology is being used in Copenhagen EMS, who have spearheaded the application of machine learning in the prehospital space worldwide,” Cleve said. “At Copenhagen EMS, our technology is able to give emergency call takers diagnostic advice in natural language, and it’s integrated directly into the software they are already using. Our goal is to make it easier for medical personnel to do their jobs, not complicate them further with fancier technology. We are extremely skeptical of the idea of rushing to replace trained medical personnel with A.I., since from both an ethical and a professional perspective we prefer human contact when it comes to our health. Personally, I simply can’t see myself preferring a bot over a medically trained human agent. But the setup where humans are amplified by A.I.? That to us is a far more powerful scenario in healthcare.”

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…