
Google’s A.I. can now detect breast cancer more accurately than doctors can

 

Google’s artificial intelligence technology is able to spot signs of breast cancer in women with more accuracy than doctors, according to a new study. 


The study, published on Wednesday, January 1, in the scientific journal Nature, found that using A.I. technology reduced both false positives and false negatives in breast cancer diagnoses.

A.I. technology was used to look at mammograms from more than 15,000 women in the United States and over 76,000 women in the United Kingdom. The program reduced false positives by 5.7% for women in the U.S. and by 1.2% for those in the U.K. False negatives were reduced by 9.4% in the U.S. and by 2.7% in the U.K.

The advanced A.I. system proved to be more accurate than human experts with knowledge of a patient’s history, even if doctors did a second reading of mammogram results, according to the study. 

“The performance of even the best clinicians leaves room for improvement,” the study reads. “A.I. may be uniquely poised to help with this challenge.”

The study was a collaboration between Google Health, Northwestern University, Cancer Research U.K. Imperial Centre, and Royal Surrey County Hospital. 

“Looking forward to future applications, there are some promising signs that the model could potentially increase the accuracy and efficiency of screening programs, as well as reduce wait times and stress for patients,” wrote Google Health technical lead Shravya Shetty and product manager Daniel Tse in a blog post announcing the initial findings.

The use of A.I. technology to better detect breast cancer in screenings could be groundbreaking, considering that one in eight U.S. women will develop breast cancer in her lifetime, according to the American Cancer Society.

Aside from health applications, Google’s A.I. technology is also being used to identify different species of animals in a new wildlife conservation program that aims to better protect and monitor animals throughout the world.

The ultimate goal for the data is to identify different species of animals faster. According to the blog post, human experts can look through 300 to about 1,000 images per hour, but Google’s A.I. technology can analyze 3.6 million photos an hour and automatically classify the animals in them.

Google just gave vision to AI, but it’s still not available for everyone

Google has officially announced the rollout of a powerful Gemini AI feature that lets the assistant see.

This started in March as Google began to show off Gemini Live, but it's now become more widely available.

This modular Pebble and Apple Watch underdog just smashed funding goals

Both the Pebble Watch and the Apple Watch are due some fierce competition, as a new modular brand, UNA, is gaining serious backing and excitement.

The UNA Watch is the creation of a Scottish company that wants to give everyone modular control of smartwatch upgrades and repairs.

Tesla, Warner Bros. dodge some claims in ‘Blade Runner 2049’ lawsuit, copyright battle continues

Tesla and Warner Bros. scored a partial legal victory as a federal judge dismissed several claims in a lawsuit filed by Alcon Entertainment, the production company behind the 2017 sci-fi movie Blade Runner 2049, Reuters reports.

The lawsuit accused the two companies of using imagery from the film to promote Tesla’s autonomous Cybercab vehicle at an event hosted by Tesla CEO Elon Musk at Warner Bros. Discovery (WBD) Studios in Hollywood in October of last year.

U.S. District Judge George Wu indicated he was inclined to dismiss Alcon’s allegations that Tesla and Warner Bros. violated trademark law, according to Reuters. Specifically, the judge said Musk only referenced the original Blade Runner movie at the event, and noted that Tesla and Alcon are not competitors.

"Tesla and Musk are looking to sell cars," Reuters quoted Wu as saying. "Plaintiff is plainly not in that line of business."

Wu also dismissed most of Alcon's claims against Warner Bros., the distributor of the Blade Runner franchise.

However, the judge allowed Alcon to continue its copyright infringement claims against Tesla over its alleged use of AI-generated images mimicking scenes from Blade Runner 2049 without permission.

Alcon says that just hours before the Cybercab event, it had turned down a request from Tesla and WBD to use “an iconic still image” from the movie.

In the lawsuit, Alcon explained its decision by saying that “any prudent brand considering any Tesla partnership has to take Musk’s massively amplified, highly politicized, capricious and arbitrary behavior, which sometimes veers into hate speech, into account.”

Alcon further said it did not want Blade Runner 2049 “to be affiliated with Musk, Tesla, or any Musk company, for all of these reasons.”

But according to Alcon, Tesla went ahead with feeding images from Blade Runner 2049 into an AI image generator to produce a still image that appeared on screen for 10 seconds during the Cybercab event. With the image featured in the background, Musk directly referenced Blade Runner.

Alcon also said that Musk’s reference to Blade Runner 2049 was not a coincidence, as the movie features a “strikingly designed, artificially intelligent, fully autonomous car.”
