New algorithm could make screening for cervical cancer cheaper and more accurate

[Image: cervical cancer screening. Credit: Bialasiewicz/123RF]
The use of artificial intelligence in medicine isn’t just about top-flight university labs and pricey private hospitals; a group of researchers want to use it to help screen for cancer in the developing world.

“Cervical cancer is the second most common cancer to affect women,” Sharon Xiaolei Huang, associate professor of computer science and engineering at Lehigh University, told Digital Trends. “More than 80 percent of the deaths from cervical cancer occur in developing countries. The current screening methods — which include Pap smears, HPV tests, and other tests — often have low sensitivity. That means that a lot of patients, even if they go for screening, have their cancer undetected. That’s if they can have screenings at all, since the high cost can often prove prohibitive. Motivated by that, we saw that there was a call for a more sensitive, less expensive, and more highly automated screening method.”

That’s where the algorithm developed by Huang and her colleagues comes into play. Based on 10 years of work, their algorithm is able to recognize signs of cervical cancer based on noninvasive photos of the cervix. It was trained using data from 1,112 patient visits, of which 345 had lesions that were positive for dysplasia likely to develop into cancer, and 767 had lesions that did not fall into this category.

Searching for visual signs of cancer, the team’s algorithm achieved roughly 10 percent higher sensitivity and specificity than other screening methods, while also costing less. Its accuracy exceeded 85 percent.
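To make the metrics concrete: sensitivity is the fraction of genuinely positive cases the test catches, while specificity is the fraction of negative cases it correctly clears. The sketch below computes both from confusion-matrix counts. The totals (345 positives, 767 negatives) match the dataset described above, but the individual true/false counts are hypothetical for illustration — the article does not report the paper's confusion matrix.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (true positive rate) and specificity
    (true negative rate) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of actual positives detected
    specificity = tn / (tn + fp)  # fraction of actual negatives cleared
    return sensitivity, specificity

# Hypothetical counts for illustration only; totals mirror the
# 345 dysplasia-positive and 767 negative cases in the dataset.
sens, spec = sensitivity_specificity(tp=300, fn=45, tn=690, fp=77)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

A screening test for a dangerous but treatable cancer typically prioritizes sensitivity, since a missed cancer (false negative) is costlier than a false alarm that triggers a follow-up test.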

The researchers’ work is described in an article in the journal Pattern Recognition titled “Multi-feature based benchmark for cervical dysplasia classification.” Next up, the team hopes to carry out trials using the AI system.

“There has been massive growth in AI technologies, especially over the past five or six years,” Huang said. “For example, we’ve seen a big increase in the recognition accuracy of image recognition systems. That’s very useful for medicine, which is what I’ve been working on. With recent developments, these tools are really starting to reach a point where they could be used in clinical settings. At the same time, there’s been more acceptance from clinicians and the general population about AI assisting with medicine.”

Luke Dormehl