Catdiology? Cat pictures are helping AI get better at recognizing X-rays

Sergey Nivens/123RF
It’s easy to joke that the internet was invented to give people around the world the opportunity to share pictures of cats. However, according to a new report, those kitty pictures may one day turn out to save your life.

That claim is based on work by Dr. Alvin Rajkomar, an assistant professor at the University of California, San Francisco Medical Center. Rajkomar trained a deep learning neural network to automatically detect life-threatening abnormalities in chest X-rays.

“When I was a medical resident, I ordered a stat X-ray of a patient who I suspected had a life-threatening pneumothorax — air outside of his lung compressing his heart — and happened to be standing next to the digital X-ray machine as it was being taken,” he told Digital Trends. “Seeing the finding in real-time, I was able to immediately thrust a needle into his chest to evacuate the air, saving his life. I wondered if we could create an algorithm that could identify emergent findings so that radiographs don’t sit dormant in a database, waiting for a doctor to finally read the study and contact someone to take action.”

So what does this have to do with pictures of cats? Deep learning systems, Rajkomar explained, need to train on vast numbers of images, but suitable X-ray images are in short supply for a variety of reasons.

“It wasn’t easy to collect that many radiology images, and even when we did, we discovered that incorrect metadata about the images made it difficult to harness in algorithms,” he continued.

Instead, he decided to have a go at plugging the holes with other images. Working with four Titan X GPUs and the CUDA parallel computing platform, he trained a deep learning neural network on more than 1 million color images taken from the ImageNet public database. He then retrained the network on a portion of those photos converted to grayscale, before retraining it once more on actual chest X-rays.

Crazily enough, it worked. “We were able to show that by mixing thousands of radiology images with millions of images of everyday objects, like cats and fungi, we could get excellent performance,” he said.
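To make the staged retraining idea concrete, here is a toy sketch of the general technique: train on one dataset, then carry the learned weights forward as the starting point for each subsequent, progressively more target-like dataset. This is a minimal logistic-regression stand-in on synthetic data, not the paper's actual deep network; all function names and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20                          # feature dimension of the toy "images"
true_w = rng.normal(size=d)     # one underlying task shared across stages

def make_data(n=200, shift=0.0):
    """Synthetic stand-in for a dataset; `shift` mimics a domain change."""
    X = rng.normal(shift, 1.0, size=(n, d))
    y = (X @ true_w > 0).astype(float)
    return X, y

def train(X, y, w, lr=0.1, steps=200):
    """Logistic regression by gradient descent, starting from weights w."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Stage 1: large "everyday color images" dataset, from scratch
X1, y1 = make_data(shift=0.0)
w = train(X1, y1, np.zeros(d))

# Stage 2: "grayscale versions" -- reuse stage-1 weights, don't reinitialize
X2, y2 = make_data(shift=0.3)
w = train(X2, y2, w)

# Stage 3: small "chest X-ray" dataset, brief fine-tuning on carried weights
X3, y3 = make_data(n=40, shift=0.5)
w = train(X3, y3, w, steps=50)

p3 = 1.0 / (1.0 + np.exp(-X3 @ w))
acc = float(((p3 > 0.5) == y3).mean())
print(f"target-set accuracy after staged training: {acc:.2f}")
```

The point of the sketch is only the weight hand-off between stages: the scarce target data (stage 3) never has to train a model from scratch, which is why mixing in millions of unrelated images can help.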

You can find more details in Rajkomar's recent co-authored paper, published in the Journal of Digital Imaging.

“Our hope is that we can generate algorithms that will go much further than just creating metadata,” he noted, concerning his plans for what’s next. “I envision a future where we can immediately and automatically flag radiographs with critical findings so that patients can get the care they need without delay.”

And all this time you thought cat pictures were just cute, wasted bandwidth!

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…