A.I. detects skin cancer better than dermatologists in international study

Skin cancer detection won’t be turned over to machines anytime soon, but artificial intelligence detected skin cancer more accurately than a large group of international dermatologists in controlled testing, Agence France-Presse reports.

In the study, published in Annals of Oncology, lead author Professor Holger A. Haenssle of the University of Heidelberg’s Department of Dermatology wrote, “Most dermatologists were outperformed by the CNN. Regardless of any physician’s level of experience, they may benefit from assistance by a CNN’s image classification.”

Man versus machine

The study pitted 58 dermatologists from 17 countries against a deep learning convolutional neural network (CNN).

Prior to the test, researchers from Germany, France, and the U.S. taught the CNN to differentiate benign skin lesions from dangerous melanomas. In the process, the team showed more than 100,000 images of correctly diagnosed skin lesions to the neural network, which was built on Google’s Inception v4 CNN architecture.
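The paper’s actual training pipeline is not reproduced here, but fine-tuning a pretrained Inception v4 network for a two-class (melanoma versus benign) problem generally looks something like the sketch below. This assumes the PyTorch and timm libraries and uses placeholder random tensors in place of labeled dermoscopic images; it is an illustration of the technique, not the authors’ code.

    import torch
    import timm  # assumed dependency; bundles an "inception_v4" implementation
    from torch import nn, optim
    from torch.utils.data import DataLoader, TensorDataset

    # Placeholder data standing in for labeled dermoscopic images:
    # 8 random 299x299 RGB tensors, label 1 = melanoma, 0 = benign lesion.
    images = torch.randn(8, 3, 299, 299)
    labels = torch.randint(0, 2, (8,))
    train_loader = DataLoader(TensorDataset(images, labels), batch_size=4)

    # Fine-tune a pretrained Inception v4 with a two-class output head.
    model = timm.create_model("inception_v4", pretrained=True, num_classes=2)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=1e-4)

    model.train()
    for batch_images, batch_labels in train_loader:
        optimizer.zero_grad()
        logits = model(batch_images)            # shape: (batch_size, 2)
        loss = criterion(logits, batch_labels)
        loss.backward()
        optimizer.step()

At test time, a softmax over the two output logits gives a melanoma probability for each image, which is the kind of score that can be compared against the dermatologists’ calls.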

The 58 dermatologists were divided into three self-identified groups: beginners with less than two years of experience, skilled with two to five years, and experts with more than five years of experience. There were 19 beginners, 11 skilled, and 30 experts among the group.

Two tests were run. In the first, the dermatologists were shown 100 dermoscopic images with no other information and asked to indicate whether each lesion was a malignant melanoma or benign. They were also asked whether they would recommend excision, short-term follow-up, or no action. Four weeks later, the dermatologists were shown the same images again, this time with additional clinical information about the patients plus close-up images.

The results

The CNN scored higher than the overall group of dermatologists on both tests, with and without extra information. The dermatologists accurately identified an average of 86.5 percent of the skin cancers on the image-only test. In the second test, with more information, the doctors averaged 88.9 percent accuracy. The CNN, however, correctly detected 95 percent of the melanomas based on images alone.

Broken down by experience level, none of the three groups of dermatologists was as accurate as the neural network. The team did report, however, that 18 of the dermatologists scored higher than the CNN.

“The CNN missed fewer melanomas, meaning it had a higher sensitivity than the dermatologists,” Haenssle said. It also “misdiagnosed fewer benign moles as malignant melanoma … this would result in less unnecessary surgery.”
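In this context, sensitivity is the share of true melanomas a classifier catches, while specificity is the share of benign lesions it correctly leaves alone. The short Python sketch below shows how both are computed from a confusion matrix; the counts are invented for illustration and are not the study’s data.

    # Illustrative counts only, not results from the study.
    true_positives = 95    # melanomas correctly flagged as melanoma
    false_negatives = 5    # melanomas missed
    true_negatives = 80    # benign moles correctly cleared
    false_positives = 20   # benign moles wrongly flagged (unnecessary surgery)

    sensitivity = true_positives / (true_positives + false_negatives)
    specificity = true_negatives / (true_negatives + false_positives)

    print(f"sensitivity = {sensitivity:.2f}")  # 0.95
    print(f"specificity = {specificity:.2f}")  # 0.80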

According to the authors of the study, the results do not mean machines will replace doctors. One issue is that melanomas can be difficult to recognize or image on some parts of the body, such as the toes and scalp. The study also calls for larger, repeated clinical trials.

The test does show, however, that dermatologists at all skill levels could benefit from A.I. assistance in skin cancer classification.
